Reading and Writing Parquet Files

Examples

-- read a single Parquet file
SELECT * FROM 'test.parquet';
-- figure out which columns/types are in a Parquet file
DESCRIBE SELECT * FROM 'test.parquet';
-- create a table from a Parquet file
CREATE TABLE test AS SELECT * FROM 'test.parquet';
-- if the file does not end in ".parquet", use the read_parquet function
SELECT * FROM read_parquet('test.parq');
-- use list parameter to read 3 Parquet files and treat them as a single table
SELECT * FROM read_parquet(['file1.parquet', 'file2.parquet', 'file3.parquet']);
-- read all files that match the glob pattern
SELECT * FROM 'test/*.parquet';
-- read all files that match the glob pattern, and include a "filename" column
-- that specifies which file each row came from
SELECT * FROM read_parquet('test/*.parquet', filename = true);
-- use a list of globs to read all Parquet files from 2 specific folders
SELECT * FROM read_parquet(['folder1/*.parquet', 'folder2/*.parquet']);
-- read over https
SELECT * FROM read_parquet('https://some.url/some_file.parquet');
-- query the metadata of a Parquet file
SELECT * FROM parquet_metadata('test.parquet');
-- query the schema of a Parquet file
SELECT * FROM parquet_schema('test.parquet');

-- write the results of a query to a Parquet file using the default compression (Snappy)
COPY
    (SELECT * FROM tbl)
    TO 'result-snappy.parquet'
    (FORMAT 'parquet');

-- write the results from a query to a Parquet file with specific compression and row group size
COPY
    (FROM generate_series(100_000))
    TO 'test.parquet'
    (FORMAT 'parquet', COMPRESSION 'zstd', ROW_GROUP_SIZE 100_000);

-- export the table contents of the entire database as parquet
EXPORT DATABASE 'target_directory' (FORMAT PARQUET);

Parquet Files

Parquet files are compressed columnar files that are efficient to load and process. DuckDB provides support for both reading and writing Parquet files in an efficient manner, as well as support for pushing filters and projections into the Parquet file scans.

Parquet data sets differ in the number of files, the size of individual files, the compression algorithm used, the row group size, etc. These factors have a significant effect on performance. Please consult the Performance Guide for details.

read_parquet Function

Function                               Description             Example
read_parquet(path_or_list_of_paths)    Read Parquet file(s)    SELECT * FROM read_parquet('test.parquet');
parquet_scan(path_or_list_of_paths)    Alias for read_parquet  SELECT * FROM parquet_scan('test.parquet');

If your file ends in .parquet, the function syntax is optional. The system will automatically infer that you are reading a Parquet file.

SELECT * FROM 'test.parquet';

Multiple files can be read at once by providing a glob or a list of files. Refer to the multiple files section for more information.

Parameters

There are a number of options exposed that can be passed to the read_parquet function or the COPY statement.

Name               Type    Default  Description
binary_as_string   BOOL    false    Parquet files generated by legacy writers do not correctly set the UTF8 flag for strings, causing string columns to be loaded as BLOB instead. Set this to true to load binary columns as strings.
encryption_config  STRUCT  -        Configuration for Parquet encryption.
filename           BOOL    false    Whether or not an extra filename column should be included in the result.
file_row_number    BOOL    false    Whether or not to include the file_row_number column.
hive_partitioning  BOOL    false    Whether or not to interpret the path as a Hive partitioned path.
union_by_name      BOOL    false    Whether the columns of multiple schemas should be unified by name, rather than by position.
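
For illustration, several of these parameters can be combined in a single read_parquet call. The sketch below assumes a hypothetical Hive-partitioned directory layout (orders/year=.../month=.../part-0.parquet); the path is not from this page.

-- hypothetical layout: orders/year=2024/month=01/part-0.parquet, etc.
-- unify schemas by name across the files and record which file each row came from
SELECT *
FROM read_parquet(
    'orders/*/*/*.parquet',
    hive_partitioning = true,
    union_by_name = true,
    filename = true
);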

Partial Reading

DuckDB supports projection pushdown into the Parquet file itself. That is to say, when querying a Parquet file, only the columns required for the query are read. This allows you to read only the part of the Parquet file that you are interested in. This will be done automatically by DuckDB.
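
For instance, in the following query only the referenced columns are read from the file (the column names id and name are illustrative):

-- only the "id" and "name" columns are read from the Parquet file
SELECT id, name FROM read_parquet('test.parquet');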

DuckDB also supports filter pushdown into the Parquet reader. When you apply a filter to a column that is scanned from a Parquet file, the filter will be pushed down into the scan, and can even be used to skip parts of the file using the built-in zonemaps. Note that this will depend on whether or not your Parquet file contains zonemaps.
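
As an illustration (again using a hypothetical id column), the filter below is pushed into the Parquet scan, and row groups whose min/max statistics exclude the value can be skipped entirely:

-- the filter on "id" is pushed down into the Parquet scan
SELECT * FROM read_parquet('test.parquet') WHERE id = 42;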

Filter and projection pushdown provide significant performance benefits. See our blog post on this for more information.

Inserts and Views

You can also insert the data into a table or create a table from the Parquet file directly. This will load the data from the Parquet file and insert it into the database.

-- insert the data from the Parquet file into the table
INSERT INTO people SELECT * FROM read_parquet('test.parquet');
-- create a table directly from a Parquet file
CREATE TABLE people AS SELECT * FROM read_parquet('test.parquet');

If you wish to keep the data stored inside the Parquet file, but want to query the Parquet file directly, you can create a view over the read_parquet function. You can then query the Parquet file as if it were a built-in table.

-- create a view over the Parquet file
CREATE VIEW people AS SELECT * FROM read_parquet('test.parquet');
-- query the Parquet file
SELECT * FROM people;

Writing to Parquet Files

DuckDB also has support for writing to Parquet files using the COPY statement syntax. See the COPY Statement page for details, including all possible parameters for the COPY statement.

-- write a query to a snappy compressed Parquet file
COPY
    (SELECT * FROM tbl)
    TO 'result-snappy.parquet'
    (FORMAT 'parquet');
-- write "tbl" to a zstd compressed Parquet file
COPY tbl
    TO 'result-zstd.parquet'
    (FORMAT 'parquet', CODEC 'zstd');
-- write a CSV file to an uncompressed Parquet file
COPY
    'test.csv'
    TO 'result-uncompressed.parquet'
    (FORMAT 'parquet', CODEC 'uncompressed');
-- write a query to a Parquet file with ZSTD compression (same as CODEC) and row_group_size
COPY
    (FROM generate_series(100_000))
    TO 'row-groups-zstd.parquet'
    (FORMAT PARQUET, COMPRESSION ZSTD, ROW_GROUP_SIZE 100_000);

LZ4 compression is currently only available in the nightly and source builds.

-- write a query to an LZ4_RAW-compressed Parquet file
COPY
    (FROM generate_series(100_000))
    TO 'result-lz4.parquet'
    (FORMAT PARQUET, COMPRESSION LZ4);
-- or
COPY
    (FROM generate_series(100_000))
    TO 'result-lz4.parquet'
    (FORMAT PARQUET, COMPRESSION LZ4_RAW);

DuckDB’s EXPORT command can be used to export an entire database to a series of Parquet files. See the Export statement documentation for more details.

-- export the table contents of the entire database as parquet
EXPORT DATABASE 'target_directory' (FORMAT PARQUET);

Encryption

DuckDB supports reading and writing encrypted Parquet files.
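
As a minimal sketch, a key is registered under a name and then referenced when writing or reading; the key name key256 and the placeholder key value below are illustrative (keys must be 128, 192, or 256 bits long):

-- register a named encryption key (placeholder 256-bit key shown)
PRAGMA add_parquet_key('key256', '01234567891123456789212345678901');
-- write an encrypted Parquet file using the registered key
COPY tbl TO 'tbl-encrypted.parquet'
    (FORMAT 'parquet', ENCRYPTION_CONFIG {footer_key: 'key256'});
-- read the encrypted Parquet file
SELECT * FROM read_parquet('tbl-encrypted.parquet', encryption_config = {footer_key: 'key256'});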

Installing and Loading the Parquet Extension

Support for Parquet files is enabled via the parquet extension, which is bundled with almost all clients. However, if your client does not bundle the parquet extension, it must be installed and loaded separately.

INSTALL parquet;
LOAD parquet;
