Sharing Macros
DuckDB has a powerful macro mechanism that allows creating shorthands for common tasks.
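For example, a scalar macro wraps a recurring expression so it can be called like a built-in function (an illustrative sketch; the macro name and body are made up for this example):
-- a macro with a default parameter value
CREATE MACRO add_default(a, b := 5) AS a + b;
SELECT add_default(37) AS x; -- returns 42, using the default for b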
Sharing a Scalar Macro
First, we define a macro that pretty-prints a non-negative integer as a short string using thousands (k), millions (M), and billions (B) suffixes, without rounding. Start DuckDB with a persistent database file:
duckdb pretty_print_integer_macro.duckdb
Then define the macro:
CREATE MACRO pretty_print_integer(n) AS
    CASE
        WHEN n >= 1_000_000_000 THEN printf('%dB', n // 1_000_000_000)
        WHEN n >= 1_000_000 THEN printf('%dM', n // 1_000_000)
        WHEN n >= 1_000 THEN printf('%dk', n // 1_000)
        ELSE printf('%d', n)
    END;
SELECT pretty_print_integer(25_500_000) AS x;
┌─────────┐
│ x │
│ varchar │
├─────────┤
│ 25M │
└─────────┘
As one would expect, the macro gets persisted in the database.
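Since the macro lives in the database file, you can close the session, reopen the file, and find the macro still defined. A minimal way to check, using the duckdb_functions() table function:
-- reopen the file with: duckdb pretty_print_integer_macro.duckdb
SELECT function_name, function_type
FROM duckdb_functions()
WHERE function_name = 'pretty_print_integer';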
But this also means that we can host it on an HTTPS endpoint and share it with anyone!
We have published this macro on blobs.duckdb.org.
You can try it from DuckDB:
duckdb
Make sure that the httpfs extension is installed:
INSTALL httpfs;
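On recent DuckDB versions the extension is loaded automatically on first use; on older versions you may need to load it explicitly:
LOAD httpfs;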
You can now attach to the remote endpoint and use the macro:
ATTACH 'https://blobs.duckdb.org/data/pretty_print_integer_macro.duckdb'
    AS pretty_print_macro_db;
SELECT pretty_print_macro_db.pretty_print_integer(42_123) AS x;
┌─────────┐
│ x │
│ varchar │
├─────────┤
│ 42k │
└─────────┘
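The remote macro can be used anywhere a scalar function is allowed, for example over an inline table (the values below are arbitrary):
SELECT n, pretty_print_macro_db.pretty_print_integer(n) AS pretty
FROM (VALUES (999), (25_000), (1_000_000), (3_000_000_000)) t(n);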
Sharing a Table Macro
It's also possible to share table macros. For example, we create a checksum macro that computes an order-insensitive, per-column checksum of a table. Again, start DuckDB with a persistent database file:
duckdb compute_table_checksum.duckdb
CREATE MACRO checksum(table_name) AS TABLE
    SELECT bit_xor(md5_number(COLUMNS(*)::VARCHAR))
    FROM query_table(table_name);
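Before publishing it, you can sanity-check the macro on a small local table; it returns one checksum per column (the table t below is a throwaway example):
CREATE TABLE t AS SELECT range AS i, range::VARCHAR AS s FROM range(3);
FROM checksum('t');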
To use the published copy, make sure that the httpfs extension is installed:
INSTALL httpfs;
You can attach to the remote endpoint and use the macro:
ATTACH 'https://blobs.duckdb.org/data/compute_table_checksum.duckdb'
    AS compute_table_checksum_db;
CREATE TABLE stations AS
    FROM 'https://blobs.duckdb.org/stations.parquet';
Switch the CLI's output to line mode and run the table macro:
.mode line
FROM compute_table_checksum_db.checksum('stations');
id = -132780776949939723506211681506129908318
code = 126327004005066229305810236187733612209
uic = -145623335062491121476006068124745817380
name_short = -114540917565721687000878144381189869683
name_medium = -568264780518431562127359918655305384
name_long = 126079956280724674884063510870679874110
slug = -53458800462031706622213217090663245511
country = 143068442936912051858689770843609587944
type = 5665662315470785456147400604088879751
geo_lat = 160608116135251821259126521573759502306
geo_lng = -138297281072655463682926723171691547732
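One way to put the checksum to work is verifying that a table survives an export/import round trip. Since bit_xor is insensitive to row order, matching checksums indicate identical column contents (the file name below is arbitrary):
COPY stations TO 'stations_roundtrip.parquet';
CREATE TABLE stations_roundtrip AS
    FROM 'stations_roundtrip.parquet';
FROM compute_table_checksum_db.checksum('stations_roundtrip');
-- the output should match the checksums shown above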