Timestamps represent points in time. As such, they combine `DATE` and `TIME` information.
They can be created using the type name followed by a string formatted according to the ISO 8601 format, `YYYY-MM-DD hh:mm:ss[.zzzzzzzzz][+-TT[:tt]]`, which is also the format we use in this documentation. Decimal places beyond the supported precision are ignored.
## Timestamp Types
| Name | Aliases | Description |
|---|---|---|
| `TIMESTAMP_NS` | | Naive timestamp with nanosecond precision |
| `TIMESTAMP` | `DATETIME`, `TIMESTAMP WITHOUT TIME ZONE` | Naive timestamp with microsecond precision |
| `TIMESTAMP_MS` | | Naive timestamp with millisecond precision |
| `TIMESTAMP_S` | | Naive timestamp with second precision |
| `TIMESTAMPTZ` | `TIMESTAMP WITH TIME ZONE` | Time zone aware timestamp with microsecond precision |
> **Warning** Since there is currently no `TIMESTAMP_NS WITH TIME ZONE` data type, external columns with nanosecond precision and `WITH TIME ZONE` semantics, e.g., Parquet timestamp columns with `isAdjustedToUTC=true`, are converted to `TIMESTAMP WITH TIME ZONE` and thus lose precision when read using DuckDB.
```sql
SELECT TIMESTAMP_NS '1992-09-20 11:30:00.123456789';
```

```text
1992-09-20 11:30:00.123456789
```

```sql
SELECT TIMESTAMP '1992-09-20 11:30:00.123456789';
```

```text
1992-09-20 11:30:00.123456
```

```sql
SELECT TIMESTAMP_MS '1992-09-20 11:30:00.123456789';
```

```text
1992-09-20 11:30:00.123
```

```sql
SELECT TIMESTAMP_S '1992-09-20 11:30:00.123456789';
```

```text
1992-09-20 11:30:00
```

```sql
SELECT TIMESTAMPTZ '1992-09-20 11:30:00.123456789';
```

```text
1992-09-20 11:30:00.123456+00
```

```sql
SELECT TIMESTAMPTZ '1992-09-20 12:30:00.123456789+01:00';
```

```text
1992-09-20 11:30:00.123456+00
```
DuckDB distinguishes timestamps `WITHOUT TIME ZONE` and `WITH TIME ZONE` (of which the only current representative is `TIMESTAMP WITH TIME ZONE`).

Despite the name, a `TIMESTAMP WITH TIME ZONE` does not store time zone information. Instead, it only stores the `INT64` number of non-leap microseconds since the Unix epoch `1970-01-01 00:00:00+00`, and thus unambiguously identifies a point in absolute time, or *instant*. The reason for the labels *time zone aware* and `WITH TIME ZONE` is that timestamp arithmetic, binning, and string formatting for this type are performed in a configured time zone, which defaults to the system time zone and is just `UTC+00:00` in the examples above.

The corresponding `TIMESTAMP WITHOUT TIME ZONE` stores the same `INT64`, but arithmetic, binning, and string formatting follow the straightforward rules of Coordinated Universal Time (UTC) without offsets or time zones. Accordingly, `TIMESTAMP`s could be interpreted as UTC timestamps, but more commonly they are used to represent local observations of time recorded in an unspecified time zone, and operations on these values can be interpreted as simply manipulating the tuple fields following nominal temporal logic.
It is a common data cleaning problem to disambiguate such observations, which may also be stored in raw strings without time zone specifications or UTC offsets, into unambiguous `TIMESTAMP WITH TIME ZONE` instants. One possible solution is to append UTC offsets to the strings, followed by an explicit cast to `TIMESTAMP WITH TIME ZONE`. Alternatively, a `TIMESTAMP WITHOUT TIME ZONE` may be created first and then combined with a time zone specification to obtain a time zone aware `TIMESTAMP WITH TIME ZONE`.
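Both approaches can be sketched as follows (a minimal sketch; the offset and time zone are hypothetical values for a record known to have been observed in Central European Summer Time, and the `timezone` function requires the `icu` extension):

```sql
-- Approach 1: append a known UTC offset to the raw string, then cast:
SELECT ('1992-09-20 11:30:00' || '+02:00')::TIMESTAMPTZ AS via_offset;

-- Approach 2: create a naive timestamp first, then attach a time zone:
SELECT timezone('Europe/Berlin', TIMESTAMP '1992-09-20 11:30:00') AS via_timezone;
```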
## Conversion Between Strings and Naive and Time Zone-Aware Timestamps
The conversion between strings without UTC offsets or IANA time zone names and `WITHOUT TIME ZONE` types is unambiguous and straightforward.
The conversion between strings with UTC offsets or time zone names and `WITH TIME ZONE` types is also unambiguous, but requires the `icu` extension to handle time zone names.

When strings without UTC offsets or time zone names are converted to a `WITH TIME ZONE` type, the string is interpreted in the configured time zone. Conversely, when strings with UTC offsets are passed to a `WITHOUT TIME ZONE` type, the local time in the configured time zone at the instant specified by the string is stored.
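These two rules can be sketched as follows (a minimal sketch, assuming the `icu` extension is loaded and the configured time zone is `UTC`):

```sql
SET TimeZone = 'UTC';

-- A string without an offset cast to a WITH TIME ZONE type
-- is interpreted in the configured time zone:
SELECT '2001-02-16 20:38:40'::TIMESTAMPTZ;     -- 2001-02-16 20:38:40+00

-- A string with an offset cast to a WITHOUT TIME ZONE type
-- stores the local time in the configured zone at that instant:
SELECT '2001-02-16 20:38:40+05:00'::TIMESTAMP; -- 2001-02-16 15:38:40
```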
Finally, when `WITH TIME ZONE` and `WITHOUT TIME ZONE` types are converted to each other via explicit or implicit casts, the translation uses the configured time zone. To use an alternative time zone, the `timezone` function provided by the `icu` extension may be used:
```sql
SELECT
    timezone('America/Denver', TIMESTAMP '2001-02-16 20:38:40') AS aware1,
    timezone('America/Denver', TIMESTAMPTZ '2001-02-16 04:38:40') AS naive1,
    timezone('UTC', TIMESTAMP '2001-02-16 20:38:40+00:00') AS aware2,
    timezone('UTC', TIMESTAMPTZ '2001-02-16 04:38:40 Europe/Berlin') AS naive2;
```
| aware1 | naive1 | aware2 | naive2 |
|---|---|---|---|
| 2001-02-17 04:38:40+01 | 2001-02-15 20:38:40 | 2001-02-16 21:38:40+01 | 2001-02-16 03:38:40 |
Note that `TIMESTAMP`s are displayed without time zone specification in the results, following the ISO 8601 rules for local times, while time zone aware `TIMESTAMPTZ`s are displayed with the UTC offset of the configured time zone, which is `'Europe/Berlin'` in the example. The UTC offsets of `'America/Denver'` and `'Europe/Berlin'` at all involved instants are `-07:00` and `+01:00`, respectively.
## Special Values
Three special strings can be used to create timestamps:
| Input string | Description |
|---|---|
| `epoch` | `1970-01-01 00:00:00[+00]` (Unix system time zero) |
| `infinity` | Later than all other timestamps |
| `-infinity` | Earlier than all other timestamps |
The values `infinity` and `-infinity` are special-cased and are displayed unchanged, whereas the value `epoch` is simply a notational shorthand that is converted to the corresponding timestamp value when read.
```sql
SELECT '-infinity'::TIMESTAMP, 'epoch'::TIMESTAMP, 'infinity'::TIMESTAMP;
```
| Negative | Epoch | Positive |
|---|---|---|
| -infinity | 1970-01-01 00:00:00 | infinity |
## Functions
See Timestamp Functions.
## Time Zones
To understand time zones and the `WITH TIME ZONE` types, it helps to start with two concepts: instants and temporal binning.
### Instants
An instant is a point in absolute time, usually given as a count of some time increment from a fixed point in time (called the epoch). This is similar to how positions on the earth's surface are given using latitude and longitude relative to the equator and the Greenwich Meridian. In DuckDB, the fixed point is the Unix epoch `1970-01-01 00:00:00+00:00`, and the increment is in seconds, milliseconds, microseconds, or nanoseconds, depending on the specific data type.
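The stored count can be inspected directly; a minimal sketch using the `epoch_us` function, which returns the number of microseconds since the Unix epoch:

```sql
SELECT epoch_us(TIMESTAMP '1970-01-01 00:00:01');  -- 1000000
```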
### Temporal Binning
Binning is a common practice with continuous data: a range of possible values is broken up into contiguous subsets, and the binning operation maps actual values to the bin they fall into. Temporal binning is simply applying this practice to instants, for example by binning instants into years, months, and days.
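Temporal binning is what functions such as `date_trunc` and `date_part` perform; a minimal sketch on a naive `TIMESTAMP`, whose binning rules are fixed:

```sql
SELECT
    -- Map the instant to the month bin it falls into:
    date_trunc('month', TIMESTAMP '1992-09-20 11:30:00') AS month_bin,
    -- Extract the hour bin: 11
    date_part('hour', TIMESTAMP '1992-09-20 11:30:00') AS hour_bin;
```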
Temporal binning rules are complex, and they generally come in two sets: time zones and calendars. For most tasks, the calendar will just be the widely used Gregorian calendar, but time zones apply locale-specific rules and can vary widely.
For example, in the `'America/Los_Angeles'` time zone, the bins near the epoch are shifted relative to UTC by the local offset.

The most common temporal binning problem occurs when daylight saving time changes. Around such a change, an "hour" bin can be two hours long; to distinguish the two hours, another range of bins containing the offset from UTC is needed.
### Time Zone Support
The `TIMESTAMPTZ` type can be binned into calendar and clock bins using a suitable extension. The built-in ICU extension implements all the binning and arithmetic functions using the International Components for Unicode time zone and calendar functions.

To set the time zone to use, first load the ICU extension. The ICU extension comes pre-bundled with several DuckDB clients (including Python, R, JDBC, and ODBC), so this step can be skipped in those cases. In other cases you might first need to install and load the ICU extension:
```sql
INSTALL icu;
LOAD icu;
```
Next, use the `SET TimeZone` command:

```sql
SET TimeZone = 'America/Los_Angeles';
```
Time binning operations for `TIMESTAMPTZ` will then be implemented using the given time zone.
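The effect of the configured time zone on binning can be sketched as follows (a minimal sketch, assuming the ICU extension is loaded; `America/Los_Angeles` observes daylight saving time on this date, so its offset is `-07:00`):

```sql
SET TimeZone = 'America/Los_Angeles';

-- 11:30 UTC falls into the 04:00 hour bin in Los Angeles: returns 4
SELECT date_part('hour', TIMESTAMPTZ '1992-09-20 11:30:00+00:00') AS local_hour;
```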
A list of available time zones can be pulled from the `pg_timezone_names()` table function:
```sql
SELECT
    name,
    abbrev,
    utc_offset
FROM pg_timezone_names()
ORDER BY name;
```
You can also find a reference table of available time zones.
### Calendar Support
The ICU extension also supports non-Gregorian calendars using the `SET Calendar` command. Note that the `INSTALL` and `LOAD` steps are only required if the DuckDB client does not bundle the ICU extension.
```sql
INSTALL icu;
LOAD icu;
SET Calendar = 'japanese';
```
Time binning operations for `TIMESTAMPTZ` will then be implemented using the given calendar. In this example, the `era` part will now report the Japanese imperial era number.
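The era part can be queried with `date_part`; a minimal sketch (assuming ICU is loaded and the calendar has been set to `'japanese'` as above; the date is the hypothetical first day of the Reiwa era):

```sql
SELECT date_part('era', TIMESTAMPTZ '2019-05-01 00:00:00+00') AS era;
```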
A list of available calendars can be pulled from the `icu_calendar_names()` table function:
```sql
SELECT name
FROM icu_calendar_names()
ORDER BY 1;
```
### Settings
The current values of the `TimeZone` and `Calendar` settings are determined by ICU when it starts up. They can be queried from the `duckdb_settings()` table function:
```sql
SELECT *
FROM duckdb_settings()
WHERE name = 'TimeZone';
```
| name | value | description | input_type |
|---|---|---|---|
| TimeZone | Europe/Amsterdam | The current time zone | VARCHAR |
```sql
SELECT *
FROM duckdb_settings()
WHERE name = 'Calendar';
```
| name | value | description | input_type |
|---|---|---|---|
| Calendar | gregorian | The current calendar | VARCHAR |
If you find that your binning operations are not behaving as you expect, check the `TimeZone` and `Calendar` values and adjust them if needed.