-- read a single parquet file
SELECT * FROM 'test.parquet';
-- figure out which columns/types are in a parquet file
DESCRIBE SELECT * FROM 'test.parquet';
-- create a table from a parquet file
CREATE TABLE test AS SELECT * FROM 'test.parquet';
-- if the file does not end in ".parquet", use the read_parquet function
SELECT * FROM read_parquet('test.parq');
-- use list parameter to read 3 parquet files and treat them as a single table
SELECT * FROM read_parquet(['file1.parquet', 'file2.parquet', 'file3.parquet']);
-- read all files that match the glob pattern
SELECT * FROM 'test/*.parquet';
-- read all files that match the glob pattern, and include a "filename" column
-- that specifies which file each row came from
SELECT * FROM read_parquet('test/*.parquet', filename=true);
-- use a list of globs to read all parquet files from 2 specific folders
SELECT * FROM read_parquet(['folder1/*.parquet', 'folder2/*.parquet']);
-- query the metadata of a parquet file
SELECT * FROM parquet_metadata('test.parquet');
-- query the schema of a parquet file
SELECT * FROM parquet_schema('test.parquet');
-- write the results of a query to a parquet file
COPY (SELECT * FROM tbl) TO 'result-snappy.parquet' (FORMAT 'parquet');
-- write the results from a query to a parquet file with specific compression and row_group_size
COPY (FROM generate_series(100000)) TO 'test.parquet' (FORMAT 'parquet', COMPRESSION 'ZSTD', ROW_GROUP_SIZE 100000);
-- export the table contents of the entire database as parquet
EXPORT DATABASE 'target_directory' (FORMAT PARQUET);
Parquet files are compressed columnar files that are efficient to load and process. DuckDB provides support for both reading and writing Parquet files in an efficient manner, as well as support for pushing filters and projections into the Parquet file scans.
Parquet files are self-describing, so far fewer parameters are required than with CSV files. Nevertheless, a number of options are exposed that can be passed to the read_parquet function:
| Name | Description | Type | Default |
|------|-------------|------|---------|
| `binary_as_string` | Parquet files generated by legacy writers do not correctly set the UTF8 flag for strings, causing string columns to be loaded as `BLOB` instead. Set this to true to load binary columns as strings. | bool | false |
| `filename` | Whether or not an extra `filename` column should be included in the result. | bool | false |
| `file_row_number` | Whether or not to include the `file_row_number` column. | bool | false |
| `hive_partitioning` | Whether or not to interpret the path as a hive partitioned path. | bool | false |
| `union_by_name` | Whether the columns of multiple schemas should be unified by name, rather than by position. | bool | false |
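For instance, several of these options can be combined in a single read_parquet call; the directory layout and paths below are hypothetical:

-- read hive-partitioned files, adding a "filename" column that records each row's source file (hypothetical paths)
SELECT * FROM read_parquet('orders/*/*.parquet', filename=true, hive_partitioning=true);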
| Function | Description |
|----------|-------------|
| `read_parquet(path_or_list_of_paths)` | Read Parquet file(s) |
If your file ends in .parquet, the function syntax is optional. The system will automatically infer that you are reading a Parquet file.
SELECT * FROM 'test.parquet';
Multiple files can be read at once by providing a glob or a list of files. Refer to the multiple files section for more information.
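For instance, a glob pattern or an explicit list of files can be passed (the file names below are illustrative):

-- read every parquet file that matches the glob pattern
SELECT * FROM 'test/*.parquet';
-- read an explicit list of parquet files as a single table
SELECT * FROM read_parquet(['file1.parquet', 'file2.parquet']);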
DuckDB supports projection pushdown into the Parquet file itself. That is to say, when querying a Parquet file, only the columns required for the query are read. This allows you to read only the part of the Parquet file that you are interested in. This will be done automatically by the system.
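For example, the following query reads only the single column it references; the name column is hypothetical:

-- only the "name" column is read from the file; all other columns are skipped (hypothetical column)
SELECT name FROM read_parquet('test.parquet');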
DuckDB also supports filter pushdown into the Parquet reader. When you apply a filter to a column that is scanned from a Parquet file, the filter will be pushed down into the scan, and can even be used to skip parts of the file using the built-in zonemaps. Note that this will depend on whether or not your Parquet file contains zonemaps.
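As a sketch, assuming a hypothetical id column, the filter below is evaluated inside the Parquet scan, and row groups whose zonemap statistics fall entirely outside the predicate range can be skipped:

-- the filter is pushed down into the parquet scan (hypothetical "id" column)
SELECT * FROM read_parquet('test.parquet') WHERE id > 100;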
Filter and projection pushdown provide significant performance benefits. See our blog post on this for more information.
You can also insert the data into a table, or create a table from the Parquet file directly. This loads the data from the Parquet file and stores it in the database.
-- insert the data from the parquet file in the table
INSERT INTO people SELECT * FROM read_parquet('test.parquet');
-- create a table directly from a parquet file
CREATE TABLE people AS SELECT * FROM read_parquet('test.parquet');
If you wish to keep the data stored inside the Parquet file, but want to query the Parquet file directly, you can create a view over the read_parquet function. You can then query the Parquet file as if it were a built-in table.
-- create a view over the parquet file
CREATE VIEW people AS SELECT * FROM read_parquet('test.parquet');
-- query the parquet file
SELECT * FROM people;
DuckDB also has support for writing to Parquet files using the COPY statement syntax. See the COPY Statement page for details, including all possible parameters for the COPY statement.
-- write a query to a snappy compressed parquet file
COPY (SELECT * FROM tbl) TO 'result-snappy.parquet' (FORMAT 'parquet');
-- write "tbl" to a zstd compressed parquet file
COPY tbl TO 'result-zstd.parquet' (FORMAT 'PARQUET', CODEC 'ZSTD');
-- write a csv file to an uncompressed parquet file
COPY 'test.csv' TO 'result-uncompressed.parquet' (FORMAT 'PARQUET', CODEC 'UNCOMPRESSED');
-- write a query to a parquet file with ZSTD compression (same as CODEC) and row_group_size
COPY (FROM generate_series(100000)) TO 'row-groups-zstd.parquet' (FORMAT PARQUET, COMPRESSION ZSTD, ROW_GROUP_SIZE 100000);
The EXPORT command can be used to export an entire database to a series of Parquet files. See the Export statement documentation for more details.
-- export the table contents of the entire database as parquet
EXPORT DATABASE 'target_directory' (FORMAT PARQUET);
Support for Parquet files is enabled via an extension. The parquet extension is bundled with almost all clients. However, if your client does not bundle the parquet extension, it must be installed and loaded separately.
-- run once
INSTALL parquet;
-- run before usage
LOAD parquet;