Examples
-- write a table to a hive partitioned data set of parquet files
COPY orders TO 'orders' (FORMAT PARQUET, PARTITION_BY (year, month));
-- write a table to a hive partitioned data set of CSV files, allowing overwrites
COPY orders TO 'orders' (FORMAT CSV, PARTITION_BY (year, month), ALLOW_OVERWRITE 1);
Partitioned Writes
When the PARTITION_BY clause is specified for the COPY statement, the files are written in a hive partitioned folder hierarchy. The target is the name of the root directory (in the example above: orders). The files are written in-order in the file hierarchy. Currently, one file is written per thread to each directory.
orders
├── year=2021
│ ├── month=1
│ │ ├── data_1.parquet
│ │ └── data_2.parquet
│ └── month=2
│ └── data_1.parquet
└── year=2022
├── month=11
│ ├── data_1.parquet
│ └── data_2.parquet
└── month=12
└── data_1.parquet
The values of the partitions are automatically extracted from the data. Note that writing many partitions can be very expensive, as many files will be created. The ideal partition count depends on how large your data set is; it is generally recommended to have at least 100 MB of data per partition.
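Because the partition values come from the data itself, the partition columns must be present in the data being written. As a sketch (using a hypothetical orders_raw table with an order_date timestamp column), the partition columns can also be derived on the fly by copying the result of a query:
-- sketch: derive the partition columns from a hypothetical order_date column
COPY (
    SELECT *, year(order_date) AS year, month(order_date) AS month
    FROM orders_raw
) TO 'orders' (FORMAT PARQUET, PARTITION_BY (year, month));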
Overwriting
By default, the partitioned write will not allow overwriting existing directories. Use the ALLOW_OVERWRITE option to overwrite an existing directory.
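To spot-check the partitioned output, the files can be read back with hive partitioning enabled, so that the year and month columns are reconstructed from the directory names. This is a sketch, assuming a DuckDB version where read_parquet accepts the hive_partitioning flag and the year=/month= layout shown above:
-- sketch: read the partitioned data set back, recovering year/month from the paths
SELECT year, month, count(*)
FROM read_parquet('orders/*/*/*.parquet', hive_partitioning = true)
GROUP BY year, month;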