Loops can be used in sqllogictests when the same query needs to be executed many times with slight modifications to constant values. For example, suppose we want to fire off 100 queries that check for the presence of each of the values 0 through 99 in a table:
```
# create the table 'integers' with the values 0..99
statement ok
CREATE TABLE integers AS SELECT * FROM range(0, 100, 1) t1(i);

# verify individually that all 100 values are present
loop i 0 100

# execute the query, replacing the value
query I
SELECT COUNT(*) FROM integers WHERE i = ${i};
----
1

# end the loop (note that multiple statements can be part of a loop)
endloop
```
Similarly, `foreach` can be used to iterate over a set of values:
```
foreach partcode millennium century decade year quarter month day hour minute second millisecond microsecond epoch

query III
SELECT i, DATE_PART('${partcode}', i) AS p, DATE_PART(['${partcode}'], i) AS st
FROM intervals
WHERE p <> st['${partcode}'];
----

endloop
```
`foreach` also has a number of preset combinations that should be used where applicable. That way, when new combinations are added to a preset, old tests will automatically pick them up.
| Preset | Expansion |
|--------|-----------|
| `<compression>` | `none uncompressed rle bitpacking dictionary fsst chimp patas` |
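For example, the `<compression>` preset can drive a loop that exercises each compression scheme. The following is a minimal sketch, not a definitive test; the `force_compression` pragma setting and the table contents are assumptions for illustration:

```
foreach compression <compression>

# force the given compression scheme (assumed pragma; adjust as needed)
statement ok
PRAGMA force_compression='${compression}'

# write some data so the scheme is actually exercised
statement ok
CREATE OR REPLACE TABLE t AS SELECT i FROM range(0, 1000) tbl(i);

statement ok
CHECKPOINT

endloop
```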
Use large loops sparingly. Executing hundreds of thousands of SQL statements will slow down tests unnecessarily. Do not use loops for inserting data.
## Data Generation without Loops
Loops should be used sparingly. While it might be tempting to use loops to insert data with individual `INSERT` statements, this will considerably slow down the test cases. Instead, it is better to generate data using the built-in `range` and `repeat` functions.
```sql
-- create the table 'integers' with the values [0, 1, ..., 98, 99]
CREATE TABLE integers AS SELECT * FROM range(0, 100, 1) t1(i);

-- create the table 'strings' with 100 copies of the value 'hello'
CREATE TABLE strings AS SELECT 'hello' AS s FROM range(0, 100, 1);
```
Using these two functions, together with clever use of cross products and other expressions, many different kinds of datasets can be generated efficiently. The `random()` function can also be used to generate random data.
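For instance, a cross product of two `range` calls quickly yields a larger table. The sketch below combines this with `random()`; the table and column names are illustrative:

```sql
-- 10,000 rows from the cross product of two 100-row ranges,
-- with a pseudo-random value attached to each row
CREATE TABLE pairs AS
    SELECT r1.i AS a, r2.i AS b, random() AS r
    FROM range(0, 100) r1(i), range(0, 100) r2(i);
```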
An alternative option is to read data from an existing CSV or Parquet file. There are several large CSV files that can be loaded from the directory `test/sql/copy/csv/data/real` using a `COPY` statement or the `read_csv_auto` function.
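A minimal sketch of the `read_csv_auto` approach, where the file name below is a placeholder for one of the files in that directory:

```sql
-- infer the schema and load the file in one step
-- (the file name is a placeholder)
CREATE TABLE web_page AS
    SELECT * FROM read_csv_auto('test/sql/copy/csv/data/real/web_page.csv');
```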
The TPC-H and TPC-DS extensions can also be used to generate synthetic data, using e.g. `CALL dbgen(sf = 1)` or `CALL dsdgen(sf = 1)`.
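As a sketch, generating a small TPC-H dataset could look as follows, assuming the `tpch` extension is available in the build being tested:

```sql
-- load the TPC-H extension and generate data at scale factor 0.01
LOAD tpch;
CALL dbgen(sf = 0.01);

-- the generated tables (e.g., lineitem) can then be queried
SELECT count(*) FROM lineitem;
```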