The `iceberg` extension supports attaching Iceberg REST Catalogs. Before attaching an Iceberg REST Catalog, you must install the `iceberg` extension by following the instructions in the overview.
If you are attaching to an Iceberg REST Catalog managed by Amazon, please see the instructions for attaching to Amazon S3 Tables or Amazon SageMaker Lakehouse. For all other Iceberg REST Catalogs, follow the instructions below. For questions about particular catalogs, see the Specific Catalog Examples section.

Most Iceberg REST Catalogs authenticate via OAuth2. You can use the existing DuckDB secret workflow to store login credentials for the OAuth2 service:
```sql
CREATE SECRET iceberg_secret (
    TYPE ICEBERG,
    CLIENT_ID 'admin',
    CLIENT_SECRET 'password',
    OAUTH2_SERVER_URI 'http://irc_host_url.com/v1/oauth/tokens'
);
```
If you already have a Bearer token, you can pass it directly to your CREATE SECRET statement:

```sql
CREATE SECRET iceberg_secret (
    TYPE ICEBERG,
    TOKEN 'bearer_token'
);
```
You can attach the Iceberg catalog with the following ATTACH statement:

```sql
ATTACH 'warehouse' AS iceberg_catalog (
    TYPE iceberg,
    SECRET iceberg_secret, -- pass a specific secret name to prevent ambiguity
    ENDPOINT 'https://rest_endpoint.com'
);
```
To see the available tables, run:

```sql
SHOW ALL TABLES;
```
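Once attached, Iceberg tables can be queried with fully qualified names. As a sketch, assuming a namespace `my_namespace` containing a table `my_table` (both placeholder names):

```sql
SELECT count(*)
FROM iceberg_catalog.my_namespace.my_table;
```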
ATTACH Options

A REST Catalog with OAuth2 authorization can also be attached with just an ATTACH statement. The complete list of ATTACH options for a REST Catalog is given below.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `ENDPOINT_TYPE` | `VARCHAR` | `NULL` | Used for attaching S3Tables or Glue catalogs. Allowed values are `'GLUE'` and `'S3_TABLES'`. |
| `ENDPOINT` | `VARCHAR` | `NULL` | URL endpoint to communicate with the REST Catalog. Cannot be used in conjunction with `ENDPOINT_TYPE`. |
| `SECRET` | `VARCHAR` | `NULL` | Name of the secret used to communicate with the REST Catalog. |
| `CLIENT_ID` | `VARCHAR` | `NULL` | `CLIENT_ID` used for the secret. |
| `CLIENT_SECRET` | `VARCHAR` | `NULL` | `CLIENT_SECRET` needed for the secret. |
| `DEFAULT_REGION` | `VARCHAR` | `NULL` | Default region to use when communicating with the storage layer. |
| `OAUTH2_SERVER_URI` | `VARCHAR` | `NULL` | OAuth2 server URL for obtaining a Bearer token. |
| `AUTHORIZATION_TYPE` | `VARCHAR` | `OAUTH2` | Pass `SIGV4` for catalogs that require SigV4 authorization. |
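For instance, combining the options above, a catalog could be attached in a single statement without creating a secret first. This is a sketch reusing the placeholder credentials and endpoint from the earlier examples; whether these exact options apply depends on your catalog's authorization setup:

```sql
ATTACH 'warehouse' AS iceberg_catalog (
    TYPE iceberg,
    CLIENT_ID 'admin',
    CLIENT_SECRET 'password',
    OAUTH2_SERVER_URI 'http://irc_host_url.com/v1/oauth/tokens',
    ENDPOINT 'https://rest_endpoint.com'
);
```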
The following options can only be passed to a CREATE SECRET statement, and they require AUTHORIZATION_TYPE to be OAUTH2.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `OAUTH2_GRANT_TYPE` | `VARCHAR` | `NULL` | Grant type when requesting an OAuth token. |
| `OAUTH2_SCOPE` | `VARCHAR` | `NULL` | Requested scope for the returned OAuth access token. |
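As an illustration, a secret using both options might look like the following. The grant type and scope values here are placeholders; which values are valid depends on your OAuth2 server:

```sql
CREATE SECRET iceberg_secret (
    TYPE ICEBERG,
    CLIENT_ID 'admin',
    CLIENT_SECRET 'password',
    OAUTH2_SERVER_URI 'http://irc_host_url.com/v1/oauth/tokens',
    OAUTH2_GRANT_TYPE 'client_credentials', -- placeholder: common OAuth2 grant type
    OAUTH2_SCOPE 'scope'                    -- placeholder: server-specific scope
);
```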
Specific Catalog Examples
R2 Catalog

To attach to a Cloudflare-managed R2 catalog, follow the steps below.
```sql
CREATE SECRET r2_secret (
    TYPE ICEBERG,
    TOKEN 'r2_token'
);
```
You can create a token by following the "create an API token" steps in getting started. Then, attach the catalog with the following command:
```sql
ATTACH 'warehouse' AS my_r2_catalog (
    TYPE ICEBERG,
    ENDPOINT 'catalog-uri'
);
```
The values for `warehouse` and `catalog-uri` are available under the settings of the desired R2 Object Storage Catalog (R2 Object Store > Catalog name > Settings).
Polaris

To attach to a Polaris catalog, use the following commands:
```sql
CREATE SECRET polaris_secret (
    TYPE ICEBERG,
    CLIENT_ID 'admin',
    CLIENT_SECRET 'password'
);
```
```sql
ATTACH 'quickstart_catalog' AS polaris_catalog (
    TYPE ICEBERG,
    ENDPOINT 'polaris_rest_catalog_endpoint'
);
```
Lakekeeper

To attach to a Lakekeeper catalog, use the following commands:
```sql
CREATE SECRET lakekeeper_secret (
    TYPE ICEBERG,
    CLIENT_ID 'admin',
    CLIENT_SECRET 'password',
    OAUTH2_SCOPE 'scope',
    OAUTH2_SERVER_URI 'lakekeeper_oauth_url'
);
```

```sql
ATTACH 'warehouse' AS lakekeeper_catalog (
    TYPE ICEBERG,
    ENDPOINT 'lakekeeper_irc_url',
    SECRET lakekeeper_secret
);
```
Limitations
Reading from Iceberg REST Catalogs backed by remote storage that is not S3 or S3Tables is not yet supported.