Storage Versions & Format
The DuckDB internal storage format is currently in flux, and is expected to change with each release until we reach v1.0.0.
How to Move Between Storage Formats
When you update DuckDB and open a database file, you might encounter an error message about incompatible storage formats, pointing to this page. To move your database(s) to a newer format, you only need the older and the newer DuckDB executables.
Open your database file with the older DuckDB executable and run the SQL statement EXPORT DATABASE 'tmp'. This saves the whole state of the current database into the folder tmp.
The contents of the tmp folder will be overwritten, so choose an empty or not-yet-existing location. Then start the newer DuckDB executable and run IMPORT DATABASE 'tmp' (pointing to the previously populated folder) to load the database, which is then saved to the file you pointed DuckDB to.
A bash two-liner (adapt the file names and executable locations as needed) is:
$ /older/version/duckdb mydata.db -c "EXPORT DATABASE 'tmp'"
$ /newer/duckdb mydata.new.db -c "IMPORT DATABASE 'tmp'"
After this, mydata.db is left untouched in the old format, mydata.new.db contains the same data in a format readable by the more recent DuckDB, and the folder tmp holds the same data in a universal format as separate files.
Check the EXPORT documentation for more details on the syntax.
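With the default settings, the exported folder typically contains a schema.sql file with the schema definitions, a load.sql file that reloads the data, and one CSV file per table; this is what makes the exported data readable by any DuckDB version (and by other tools as well).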
Storage Header
DuckDB files start with a uint64_t which contains a checksum for the main header, followed by four magic bytes (DUCK), followed by the storage version number in a uint64_t.
$ hexdump -n 20 -C mydata.db
00000000 01 d0 e2 63 9c 13 39 3e 44 55 43 4b 2b 00 00 00 |...c..9>DUCK+...|
00000010 00 00 00 00 |....|
00000014
A simple example of reading the storage version using Python is below.
import struct
# skip the 8-byte checksum, then read the 4 magic bytes and the uint64 storage version
pattern = struct.Struct('<8x4sQ')
with open('test/sql/storage_version/storage_version.db', 'rb') as fh:
    print(pattern.unpack(fh.read(pattern.size)))
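Applied to the mydata.db file shown in the hexdump above, the same pattern yields (b'DUCK', 43), i.e., storage version 43 (the bytes 2b 00 00 00 00 00 00 00 read as a little-endian uint64).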
Storage Version Table
For changes in each given release, check out the change log on GitHub. To see the commits that changed each storage version, see the commit log.
Storage version | DuckDB version(s) |
---|---|
64 | v0.9.0, v0.9.1, v0.9.2 |
51 | v0.8.0, v0.8.1 |
43 | v0.7.0, v0.7.1 |
39 | v0.6.0, v0.6.1 |
38 | v0.5.0, v0.5.1 |
33 | v0.3.3, v0.3.4, v0.4.0 |
31 | v0.3.2 |
27 | v0.3.1 |
25 | v0.3.0 |
21 | v0.2.9 |
18 | v0.2.8 |
17 | v0.2.7 |
15 | v0.2.6 |
13 | v0.2.5 |
11 | v0.2.4 |
6 | v0.2.3 |
4 | v0.2.2 |
1 | v0.2.1 and prior |
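Combining the header layout described above with this table, a small Python sketch can report which DuckDB versions wrote a given file. The mapping below is transcribed from the table, and the file name is illustrative:
import struct

# storage version -> DuckDB version(s), transcribed from the table above
STORAGE_VERSIONS = {
    64: "v0.9.0, v0.9.1, v0.9.2",
    51: "v0.8.0, v0.8.1",
    43: "v0.7.0, v0.7.1",
    39: "v0.6.0, v0.6.1",
    38: "v0.5.0, v0.5.1",
    33: "v0.3.3, v0.3.4, v0.4.0",
    31: "v0.3.2",
    27: "v0.3.1",
    25: "v0.3.0",
    21: "v0.2.9",
    18: "v0.2.8",
    17: "v0.2.7",
    15: "v0.2.6",
    13: "v0.2.5",
    11: "v0.2.4",
    6: "v0.2.3",
    4: "v0.2.2",
    1: "v0.2.1 and prior",
}

# skip the checksum, then read the magic bytes and the storage version
pattern = struct.Struct('<8x4sQ')
with open('mydata.db', 'rb') as fh:
    magic, version = pattern.unpack(fh.read(pattern.size))

if magic != b'DUCK':
    print("not a DuckDB database file")
else:
    print(f"storage version {version}: written by DuckDB {STORAGE_VERSIONS.get(version, 'unknown')}")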
Disk Usage
The disk usage of DuckDB’s format depends on a number of factors, including the data types, the data distribution, and the compression methods used. As a rough approximation, loading 100 GB of uncompressed CSV files into a DuckDB database file will require 25 GB of disk space, while loading 100 GB of Parquet files will require 120 GB of disk space.
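To check the on-disk footprint of an existing database from within DuckDB, you can query the database_size pragma. A minimal sketch using the Python client (the file name is illustrative):
import duckdb

# open the database file and report its size and block usage on disk
con = duckdb.connect('mydata.db')
for row in con.execute("PRAGMA database_size").fetchall():
    print(row)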