Examples
-- read data from a hive partitioned data set
SELECT * FROM parquet_scan('orders/*/*/*.parquet', hive_partitioning=1);
-- write a table to a hive partitioned data set
COPY orders TO 'orders' (FORMAT PARQUET, PARTITION_BY (year, month));
Hive Partitioning
Hive partitioning is a partitioning strategy that splits a table into multiple files based on partition keys. The files are organized into folders, and the name of each folder determines the value of the partition key for the files it contains.
Below is an example of a hive partitioned file hierarchy. The files are partitioned on two keys (year and month).
orders
├── year=2021
│ ├── month=1
│ │ ├── file1.parquet
│ │ └── file2.parquet
│ └── month=2
│ └── file3.parquet
└── year=2022
├── month=11
│ ├── file4.parquet
│ └── file5.parquet
└── month=12
└── file6.parquet
Files stored in this hierarchy can be read using the hive_partitioning flag.
SELECT * FROM parquet_scan('orders/*/*/*.parquet', hive_partitioning=1);
When we specify the hive_partitioning flag, the values of the partition columns are read from the directory names.
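These partition columns can then be used like any other column. As a small illustrative sketch (assuming the orders dataset shown above), they can be projected and grouped directly:
SELECT year, month, count(*) AS order_count
FROM parquet_scan('orders/*/*/*.parquet', hive_partitioning=1)
GROUP BY year, month;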
Filter Pushdown
Filters on the partition keys are automatically pushed down into the files. This way the system skips reading files that are not necessary to answer a query. For example, consider the following query on the above dataset:
SELECT *
FROM parquet_scan('orders/*/*/*.parquet', hive_partitioning=1)
WHERE year=2022 AND month=11;
When executing this query, only the following files will be read:
orders
└── year=2022
└── month=11
├── file4.parquet
└── file5.parquet
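One way to check that the pushdown takes effect (a hedged sketch; the exact plan output varies between DuckDB versions) is to prefix the query with EXPLAIN and inspect which filters and files appear in the resulting plan:
EXPLAIN SELECT *
FROM parquet_scan('orders/*/*/*.parquet', hive_partitioning=1)
WHERE year=2022 AND month=11;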
Writing Partitioned Files
See the Partitioned Writes section.