The `EXPORT DATABASE` command allows you to export the contents of the database to a specific directory. The `IMPORT DATABASE` command allows you to then read the contents again.
Examples

```sql
-- export the database to the target directory 'target_directory' as CSV files
EXPORT DATABASE 'target_directory';

-- export to directory 'target_directory',
-- using the given options for the CSV serialization
EXPORT DATABASE 'target_directory' (FORMAT CSV, DELIMITER '|');

-- export to directory 'target_directory', tables serialized as Parquet
EXPORT DATABASE 'target_directory' (FORMAT PARQUET);

-- export to directory 'target_directory', tables serialized as Parquet,
-- compressed with ZSTD, with a row_group_size of 100,000
EXPORT DATABASE 'target_directory' (
    FORMAT PARQUET,
    COMPRESSION ZSTD,
    ROW_GROUP_SIZE 100_000
);

-- reload the database again
IMPORT DATABASE 'source_directory';

-- alternatively, use a PRAGMA
PRAGMA import_database('source_directory');
```
For details regarding the writing of Parquet files, see the Parquet Files page in the Data Import section and the `COPY` Statement page.
EXPORT DATABASE

The `EXPORT DATABASE` command exports the full contents of the database, including schema information, tables, views, and sequences, to a specific directory that can then be loaded again. The created directory is structured as follows:

```text
target_directory/schema.sql
target_directory/load.sql
target_directory/t_1.csv
...
target_directory/t_n.csv
```
The `schema.sql` file contains the schema statements that are found in the database: any `CREATE SCHEMA`, `CREATE TABLE`, `CREATE VIEW`, and `CREATE SEQUENCE` commands that are necessary to reconstruct the database.

The `load.sql` file contains a set of `COPY` statements that can be used to read the data from the CSV files again. The file contains a single `COPY` statement for every table found in the schema.
IMPORT DATABASE

The database can be reloaded by using the `IMPORT DATABASE` command, or manually by running `schema.sql` followed by `load.sql` to re-load the data.
Last modified: 2024-03-19