Allows reading and writing files over SSH
## Maintainer(s)
onnimonni
## Installing and Loading

```sql
INSTALL sshfs FROM community;
LOAD sshfs;
```
## Example

```sql
-- Install & load the extension
INSTALL sshfs FROM community;
LOAD sshfs;

-- Authenticate with an SSH key file
CREATE SECRET my_hetzner_storagebox (
    TYPE SSH,
    USERNAME 'user',
    KEY_PATH '/Users/' || getenv('USER') || '/.ssh/storagebox_key',
    PORT 23,
    SCOPE 'sshfs://u123456.your-storagebox.de'
);

-- ...or with a password
CREATE SECRET my_server (
    TYPE SSH,
    USERNAME 'user',
    PASSWORD 'password',
    SCOPE 'sshfs://your-server.example.com'
);

-- Write data to the remote server
COPY (SELECT * FROM large_table)
TO 'sshfs://your-server.example.com/data.parquet';

-- Read the uploaded Parquet file back over SSH
SELECT * FROM 'sshfs://your-server.example.com/data.parquet';
```
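DuckDB matches a secret to a path by its `SCOPE` prefix, so other file readers should work against the same `sshfs://` URLs once a matching secret exists. A minimal sketch, assuming the secrets above are in place (the CSV path below is hypothetical):

```sql
-- Credentials are resolved from the secret whose SCOPE prefixes the path;
-- the file name here is illustrative
SELECT *
FROM read_csv('sshfs://your-server.example.com/logs/events.csv');
```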
## About sshfs

The DuckDB sshfs extension enables seamless integration with remote servers over SSH, allowing users to read from and write to remote file systems directly within DuckDB.

See https://github.com/midwork-finds-jobs/duckdb-sshfs/blob/main/README.md for more examples and details.
## Added Settings
| name | description | input_type | scope | aliases |
|---|---|---|---|---|
| sshfs_chunk_size_mb | Chunk size in MB for uploads (default: 50MB, larger chunks may improve throughput but use more memory) | BIGINT | GLOBAL | [] |
| sshfs_debug_logging | Enable debug logging for SSHFS operations | BOOLEAN | GLOBAL | [] |
| sshfs_initial_retry_delay_ms | Initial delay in milliseconds between retries, with exponential backoff (default: 1000) | BIGINT | GLOBAL | [] |
| sshfs_max_concurrent_uploads | Maximum number of concurrent chunk uploads (default: 2, higher values may improve speed but use more connections) | BIGINT | GLOBAL | [] |
| sshfs_max_retries | Maximum number of connection retry attempts (default: 3) | BIGINT | GLOBAL | [] |
| sshfs_timeout_seconds | Timeout in seconds for SSH operations (default: 300 = 5 minutes) | BIGINT | GLOBAL | [] |
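All of these are global settings, so they can be adjusted with plain `SET` statements before running a transfer. A sketch with illustrative values (the defaults come from the table above, and these numbers are examples, not recommendations):

```sql
-- Tune uploads for a high-throughput link
SET sshfs_chunk_size_mb = 100;         -- larger chunks; uses more memory per upload
SET sshfs_max_concurrent_uploads = 4;  -- more parallel chunk uploads, more connections
SET sshfs_timeout_seconds = 600;       -- allow slow operations up to 10 minutes
SET sshfs_debug_logging = true;        -- log SSHFS operations while troubleshooting
```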