Platforms
Extension binaries are distributed for several platforms (see below). For platforms where packages for certain extensions are not available, users can build them from source and install the resulting binaries manually.
All official extensions are distributed for the following platforms.
| Platform name | Operating system | Architecture | CPU types | Used by |
|---|---|---|---|---|
| `linux_amd64` | Linux | x86_64 (AMD64) | | Node.js packages, etc. |
| `linux_amd64_gcc4` | Linux | x86_64 (AMD64) | | Python packages, CLI, etc. |
| `linux_arm64` | Linux | AArch64 (ARM64) | AWS Graviton, Snapdragon, etc. | All packages |
| `osx_amd64` | macOS | x86_64 (AMD64) | Intel | All packages |
| `osx_arm64` | macOS | AArch64 (ARM64) | Apple Silicon M1, M2, etc. | All packages |
| `windows_amd64` | Windows | x86_64 (AMD64) | Intel, AMD, etc. | All packages |
For some Linux ARM distributions (e.g., for Python), two different binaries are distributed. These target either the `linux_arm64` or the `linux_arm64_gcc4` platform. Note that extension binaries are distributed for the first but not the second. Effectively, this means that on these platforms your glibc version needs to be 2.28 or higher to use the distributed extension binaries.
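As a quick way to check which glibc version a process sees, Python's standard library can report it (a sketch; on non-glibc systems such as macOS the fields come back empty):

```python
import platform

# platform.libc_ver() reports the glibc version the interpreter is linked
# against on Linux; on non-glibc systems it returns empty strings.
libc, version = platform.libc_ver()
print(libc or "not glibc", version or "n/a")
```

If the reported version is below 2.28, the distributed `linux_arm64` extension binaries will not work and building from source is required.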
Some extensions are distributed for the following platforms:

- `windows_amd64_mingw`
- `wasm_eh` and `wasm_mvp` (see DuckDB-Wasm's extensions)
For platforms outside the ones listed above, we do not officially distribute extensions (e.g., `linux_arm64_android`, `linux_arm64_gcc4`).
Extension Signing
Signed Extensions
Extensions can be signed with a cryptographic key. By default, DuckDB uses its built-in public keys to verify the integrity of extensions before loading them. All core and community extensions are signed by the DuckDB team.
Signing extensions simplifies their distribution: they can be served over plain HTTP without the need for HTTPS, which is itself supported through an extension (`httpfs`).
Unsigned Extensions
Warning Only load unsigned extensions from sources you trust. Avoid loading unsigned extensions over HTTP. Consult the Securing DuckDB page for guidelines on how to set up DuckDB in a secure manner.
If you wish to load your own extensions or extensions from third parties, you will need to enable the `allow_unsigned_extensions` flag.
To load unsigned extensions using the CLI client, pass the `-unsigned` flag to it on startup:
duckdb -unsigned
Now any extension can be loaded, signed or not:
LOAD './some/local/ext.duckdb_extension';
For client APIs, the `allow_unsigned_extensions` database configuration option needs to be set; see the respective Client API docs.
For example, for the Python client, see the Loading and Installing Extensions section in the Python API documentation.
Binary Compatibility
To avoid binary compatibility issues, the binary extensions distributed by DuckDB are tied to both a specific DuckDB version and a specific platform. This allows DuckDB to automatically detect binary compatibility between itself and a loadable extension. When trying to load an extension that was compiled for a different version or platform, DuckDB will throw an error and refuse to load the extension.
Creating a Custom Repository
You can create a custom DuckDB extension repository. A DuckDB repository is an HTTP, HTTPS, S3, or local file-based directory that serves the extension files in a specific structure. This structure is described in the “Downloading Extensions Directly from S3” section, and is the same for local paths and remote servers. For example:
base_repository_path_or_url
└── v1.0.0
└── osx_arm64
├── autocomplete.duckdb_extension
├── httpfs.duckdb_extension
├── icu.duckdb_extension
├── inet.duckdb_extension
├── json.duckdb_extension
├── parquet.duckdb_extension
├── tpcds.duckdb_extension
└── tpch.duckdb_extension
See the `extension-template` repository for all necessary code and scripts to set up a repository.
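The directory layout above can be created with a few lines of Python (a sketch; the repository name and the version and platform directory names are illustrative and must match your DuckDB version and platform):

```python
from pathlib import Path

# Build the repository skeleton: <repo>/<duckdb version>/<platform>/
repo = Path("my_repo")
platform_dir = repo / "v1.0.0" / "osx_arm64"
platform_dir.mkdir(parents=True, exist_ok=True)

# A built extension file (e.g. ext.duckdb_extension) would be copied
# into platform_dir; DuckDB can then INSTALL ext FROM 'my_repo'.
print(platform_dir.is_dir())
```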
When installing an extension from a custom repository, DuckDB will search for both a gzipped and non-gzipped version. For example:
INSTALL icu FROM '⟨custom repository⟩';
The execution of this statement will first look for `icu.duckdb_extension.gz`, then for `icu.duckdb_extension` in the repository's directory structure.
If the custom repository is served over HTTPS or S3, the `httpfs` extension is required. DuckDB will attempt to autoload the `httpfs` extension when an installation over HTTPS or S3 is attempted.