The tpch extension implements the data generator and queries for the TPC-H benchmark.
Installing and Loading
The tpch extension is shipped by default in some DuckDB builds, otherwise it will be transparently autoloaded on first use. If you would like to install and load it manually, run:
INSTALL tpch;
LOAD tpch;
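To confirm that the extension is installed and loaded, you can query DuckDB's built-in duckdb_extensions() table function (part of core DuckDB, not of this extension):

```sql
SELECT extension_name, installed, loaded
FROM duckdb_extensions()
WHERE extension_name = 'tpch';
```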
Usage
Generating Data
To generate data for scale factor 1, use:
CALL dbgen(sf = 1);
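Per the TPC-H specification, at scale factor 1 the lineitem table contains 6,001,215 rows, so a quick row count is a simple sanity check that generation succeeded:

```sql
SELECT count(*) AS lineitem_rows
FROM lineitem;
-- at sf = 1, the TPC-H specification prescribes 6,001,215 rows
```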
Calling dbgen does not clean up existing TPC-H tables. To clean up existing tables, use DROP TABLE before running dbgen:
DROP TABLE IF EXISTS customer;
DROP TABLE IF EXISTS lineitem;
DROP TABLE IF EXISTS nation;
DROP TABLE IF EXISTS orders;
DROP TABLE IF EXISTS part;
DROP TABLE IF EXISTS partsupp;
DROP TABLE IF EXISTS region;
DROP TABLE IF EXISTS supplier;
Running a Query
To run a query, e.g., query 4, use:
PRAGMA tpch(4);
┌─────────────────┬─────────────┐
│ o_orderpriority │ order_count │
│ varchar │ int64 │
├─────────────────┼─────────────┤
│ 1-URGENT │ 21188 │
│ 2-HIGH │ 20952 │
│ 3-MEDIUM │ 20820 │
│ 4-NOT SPECIFIED │ 21112 │
│ 5-LOW │ 20974 │
└─────────────────┴─────────────┘
Listing Queries
To list all 22 queries, run:
FROM tpch_queries();
This function returns a table with columns query_nr and query.
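For example, to inspect the SQL text of a single query (query 4 here) instead of running it, you can filter the result of tpch_queries():

```sql
SELECT query
FROM tpch_queries()
WHERE query_nr = 4;
```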
Listing Expected Answers
To produce the expected results for all queries on scale factors 0.01, 0.1, and 1, run:
FROM tpch_answers();
This function returns a table with columns query_nr, scale_factor, and answer.
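The answer column stores the expected result as text, so you can look up the reference answer for a given query and scale factor and compare it against your own output. A minimal sketch:

```sql
SELECT answer
FROM tpch_answers()
WHERE query_nr = 4
  AND scale_factor = 1;
```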
Data Generator Parameters
The data generator function dbgen has the following parameters:
Name | Type | Description
---|---|---
catalog | VARCHAR | Target catalog
children | UINTEGER | Number of partitions
overwrite | BOOLEAN | (Not used)
sf | DOUBLE | Scale factor
step | UINTEGER | Defines the partition to be generated, indexed from 0 to children - 1. Must be defined when the children argument is defined
suffix | VARCHAR | Append the suffix to table names
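For example, to generate a small data set into tables carrying a distinguishing suffix (the resulting table name lineitem_sf01 below follows from the suffix parameter, which appends the given string to each table name):

```sql
CALL dbgen(sf = 0.1, suffix = '_sf01');
SELECT count(*) FROM lineitem_sf01;
```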
Generating Larger-than-Memory Data Sets
To generate larger-than-memory data sets, run the dbgen function in steps. For example:
CALL dbgen(sf = 1000, children = 100, step = 0);
CALL dbgen(sf = 1000, children = 100, step = 1);
...
CALL dbgen(sf = 1000, children = 100, step = 99);
Limitations
- The data generator function dbgen is single-threaded and does not support concurrency. Running multiple steps to parallelize over different partitions is also not supported at the moment.
- The tpch({query_id}) function runs a fixed TPC-H query with pre-defined bind parameters (a.k.a. substitution parameters). It is not possible to change the query parameters using the tpch extension.