The DuckDB internal storage format is currently in flux, and is expected to change with each release until we reach v1.0.0.
When you update DuckDB and open a database file, you might encounter an error message about incompatible storage formats, pointing to this page. To move your database(s) to the newer format, you only need the older and the newer DuckDB executables.
Open your database file with the older DuckDB executable and run the SQL statement `EXPORT DATABASE 'tmp'`. This saves the whole state of the current database into the folder `tmp`. The content of the `tmp` folder will be overwritten, so choose an empty or not-yet-existing location. Then start the newer DuckDB and execute `IMPORT DATABASE 'tmp'` (pointing to the previously populated folder) to load the database, which can then be saved to the file you pointed DuckDB to.
A bash two-liner (to be adapted to your file names and executable locations) is:
```bash
$ /older/version/duckdb mydata.db -c "EXPORT DATABASE 'tmp'"
$ /newer/duckdb mydata.new.db -c "IMPORT DATABASE 'tmp'"
```
After these commands, `mydata.db` is untouched and remains in the old format, `mydata.new.db` contains the same data in a format readable by more recent DuckDB versions, and the folder `tmp` holds the same data in a universal format, spread across several files. See the `EXPORT` documentation for more details on the syntax.
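If you have several databases to migrate, the same two steps can be scripted. Below is a minimal sketch using Python's `subprocess` module; the executable paths and file names are the placeholders from the example above and must be adapted to your setup.

```python
# Minimal migration sketch: runs the older and newer DuckDB CLI executables in
# sequence. Paths and file names are placeholders, adapt them to your setup.
import subprocess

OLD_DUCKDB = '/older/version/duckdb'  # executable that can read the old format
NEW_DUCKDB = '/newer/duckdb'          # executable that writes the new format

# Step 1: dump the old database into the 'tmp' folder.
subprocess.run([OLD_DUCKDB, 'mydata.db', '-c', "EXPORT DATABASE 'tmp'"], check=True)

# Step 2: import the dump into a new database file with the newer executable.
subprocess.run([NEW_DUCKDB, 'mydata.new.db', '-c', "IMPORT DATABASE 'tmp'"], check=True)
```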
DuckDB files start with a `uint64_t` which contains a checksum for the main header, followed by four magic bytes (`DUCK`), followed by the storage version number in a `uint64_t`.
```bash
$ hexdump -n 20 -C mydata.db
00000000  01 d0 e2 63 9c 13 39 3e  44 55 43 4b 2b 00 00 00  |...c..9>DUCK+...|
00000010  00 00 00 00                                       |....|
00000014
```

In this example, the bytes following the `DUCK` magic (`2b 00 00 00 00 00 00 00`, little-endian) decode to storage version 43.
A simple example of reading the storage version using Python is below.
```python
import struct

# Header layout: 8-byte checksum (skipped), 4 magic bytes, and the storage
# version as a little-endian uint64.
pattern = struct.Struct('<8x4sQ')

with open('test/sql/storage_version/storage_version.db', 'rb') as fh:
    print(pattern.unpack(fh.read(pattern.size)))
```
| Storage version | DuckDB version(s)      |
|----------------:|------------------------|
| 64              | v0.9.0, v0.9.1, v0.9.2 |
| 33              | v0.3.3, v0.3.4, v0.4.0 |
| 1               | v0.2.1 and prior       |
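To map a file's storage version to the corresponding DuckDB releases programmatically, the header-reading snippet above can be combined with this table. A small sketch, covering only the storage versions listed here (`mydata.db` is a placeholder file name):

```python
# Sketch: read the storage version from a DuckDB file and look it up in the
# table above. Only the versions listed in this document are included.
import struct

STORAGE_VERSIONS = {
    64: 'v0.9.0, v0.9.1, v0.9.2',
    33: 'v0.3.3, v0.3.4, v0.4.0',
    1: 'v0.2.1 and prior',
}

def read_storage_version(path):
    """Return (magic_bytes, storage_version) from the start of a DuckDB file."""
    pattern = struct.Struct('<8x4sQ')  # skip checksum, read magic and version
    with open(path, 'rb') as fh:
        return pattern.unpack(fh.read(pattern.size))

magic, version = read_storage_version('mydata.db')  # placeholder file name
if magic == b'DUCK':
    print(version, '->', STORAGE_VERSIONS.get(version, 'not listed in the table above'))
else:
    print('Not a DuckDB database file')
```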
The disk usage of DuckDB’s format depends on several factors, including the data types, the data distribution, and the compression methods used. As a rough approximation, loading 100 GB of uncompressed CSV files into a DuckDB database file requires about 25 GB of disk space, while loading 100 GB of Parquet files requires about 120 GB of disk space.
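Since these numbers are only rough approximations, it can be worth measuring the footprint for your own data. A minimal sketch using the DuckDB Python package, assuming placeholder file names `data.csv` and `sizecheck.db`:

```python
# Sketch (not from the DuckDB documentation): load a CSV into a database file
# and compare on-disk sizes. 'data.csv' and 'sizecheck.db' are placeholders.
import os
import duckdb

con = duckdb.connect('sizecheck.db')
con.execute("CREATE TABLE t AS SELECT * FROM read_csv_auto('data.csv')")
con.close()  # closing checkpoints the data into the database file

csv_gb = os.path.getsize('data.csv') / 1e9
db_gb = os.path.getsize('sizecheck.db') / 1e9
print(f"CSV: {csv_gb:.2f} GB, DuckDB database: {db_gb:.2f} GB")
```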