The Secrets manager provides a unified user interface for secrets across all backends that use them. Secrets can be scoped, so different storage prefixes can have different secrets, making it possible, for example, to join data across organizations in a single query. Secrets can also be persisted, so that they do not need to be specified every time DuckDB is launched.
Warning: Persistent secrets are stored in unencrypted binary format on disk.
Secrets
Types of Secrets
Secrets are typed; a secret's type identifies which service it is for. Currently, the following cloud services are available:
- AWS S3 (S3), through the httpfs extension
- Azure Blob Storage (AZURE), through the azure extension
- Cloudflare R2 (R2), through the httpfs extension
- Google Cloud Storage (GCS), through the httpfs extension
- Hugging Face (HUGGINGFACE), through the httpfs extension
- MySQL (MYSQL), through the mysql extension
- PostgreSQL (POSTGRES), through the postgres extension
For each type, there are one or more “secret providers” that specify how the secret is created. Secrets can also have an optional scope, which is a file path prefix that the secret applies to. When fetching a secret for a path, the secret scopes are compared to the path and the matching secret is returned. If multiple secrets match, the one with the longest matching prefix is chosen.
Creating a Secret
Secrets can be created using the CREATE SECRET SQL statement.
Secrets can be temporary or persistent. Temporary secrets are used by default; they are stored in memory for the life span of the DuckDB instance, similar to how settings worked previously. Persistent secrets are stored in unencrypted binary format in the ~/.duckdb/stored_secrets directory. On startup, DuckDB reads persistent secrets from this directory and loads them automatically.
Secret Providers
To create a secret, a Secret Provider needs to be used. A Secret Provider is a mechanism through which a secret is generated. To illustrate this, for the S3, GCS, R2, and AZURE secret types, DuckDB currently supports two providers: CONFIG and CREDENTIAL_CHAIN. The CONFIG provider requires the user to pass all configuration information in the CREATE SECRET statement, whereas the CREDENTIAL_CHAIN provider automatically tries to fetch credentials. When no Secret Provider is specified, the CONFIG provider is used. For more details on how to create secrets using different providers, check out the respective pages on httpfs and azure.
Temporary Secrets
To create a temporary, unscoped secret to access S3, use the following:
CREATE SECRET my_secret (
TYPE S3,
KEY_ID 'my_secret_key',
SECRET 'my_secret_value',
REGION 'my_region'
);
Note that we implicitly use the default CONFIG secret provider here.
Persistent Secrets
To persist secrets between DuckDB database instances, use the CREATE PERSISTENT SECRET command, e.g.:
CREATE PERSISTENT SECRET my_persistent_secret (
TYPE S3,
KEY_ID 'my_secret_key',
SECRET 'my_secret_value'
);
By default, this will write the secret (unencrypted) to the ~/.duckdb/stored_secrets directory. To change the secrets directory, issue:
SET secret_directory = 'path/to/my_secrets_dir';
Note that setting the value of the home_directory configuration option has no effect on the location of the secrets.
Deleting Secrets
Secrets can be deleted using the DROP SECRET statement, e.g.:
DROP PERSISTENT SECRET my_persistent_secret;
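Temporary secrets are dropped the same way, without the PERSISTENT keyword. A minimal sketch, assuming a temporary secret named my_secret exists:

```sql
-- Drops a temporary secret; TEMPORARY is the default scope
-- for DROP SECRET, so no extra keyword is needed.
DROP SECRET my_secret;
```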
Creating Multiple Secrets for the Same Service Type
If two secrets exist for a service type, their scopes determine which one is used. For example:
CREATE SECRET secret1 (
TYPE S3,
KEY_ID 'my_secret_key1',
SECRET 'my_secret_value1',
SCOPE 's3://my-bucket'
);
CREATE SECRET secret2 (
TYPE S3,
KEY_ID 'my_secret_key2',
SECRET 'my_secret_value2',
SCOPE 's3://my-other-bucket'
);
Now, if the user queries something from s3://my-other-bucket/something, secret secret2 will be chosen automatically for that request. To see which secret is being used, call the which_secret scalar function, which takes a path and a secret type as parameters:
FROM which_secret('s3://my-other-bucket/file.parquet', 's3');
Listing Secrets
Secrets can be listed using the built-in duckdb_secrets() table function:
FROM duckdb_secrets();
Sensitive information will be redacted.