datafusion-cli
The DataFusion CLI is a command-line interactive SQL utility for executing queries against any supported data files. It is a convenient way to try DataFusion’s SQL support with your own data.
Example
Create a CSV file to query.
$ echo "a,b" > data.csv
$ echo "1,2" >> data.csv
Query that single file (the CLI also supports Parquet, compressed CSV, Avro, JSON, and more):
$ datafusion-cli
DataFusion CLI v17.0.0
❯ select * from 'data.csv';
+---+---+
| a | b |
+---+---+
| 1 | 2 |
+---+---+
1 row in set. Query took 0.007 seconds.
You can also query directories of files with compatible schemas:
$ ls data_dir/
data.csv data2.csv
$ datafusion-cli
DataFusion CLI v16.0.0
❯ select * from 'data_dir';
+---+---+
| a | b |
+---+---+
| 3 | 4 |
| 1 | 2 |
+---+---+
2 rows in set. Query took 0.007 seconds.
Installation
Install and run using Cargo
The easiest way to install the DataFusion CLI is via cargo install datafusion-cli.
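For example (assuming a Rust toolchain with cargo is available):
$ cargo install datafusion-cli
$ datafusion-cli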
Install and run using Homebrew (on macOS)
The DataFusion CLI can also be installed via Homebrew (on macOS). Install it like any other pre-built software:
brew install datafusion
# ==> Downloading https://ghcr.io/v2/homebrew/core/datafusion/manifests/12.0.0
# ######################################################################## 100.0%
# ==> Downloading https://ghcr.io/v2/homebrew/core/datafusion/blobs/sha256:9ecc8a01be47ceb9a53b39976696afa87c0a8
# ==> Downloading from https://pkg-containers.githubusercontent.com/ghcr1/blobs/sha256:9ecc8a01be47ceb9a53b39976
# ######################################################################## 100.0%
# ==> Pouring datafusion--12.0.0.big_sur.bottle.tar.gz
# 🍺 /usr/local/Cellar/datafusion/12.0.0: 9 files, 17.4MB
datafusion-cli
Run using Docker
There is no officially published Docker image for the DataFusion CLI, so it is necessary to build from source instead.
Use the following commands to clone this repository and build a Docker image containing the CLI tool. Note that there is a .dockerignore file in the root of the repository that may need to be deleted in order for this to work.
git clone https://github.com/apache/arrow-datafusion
cd arrow-datafusion
git checkout 12.0.0
docker build -f datafusion-cli/Dockerfile . --tag datafusion-cli
docker run -it -v $(your_data_location):/data datafusion-cli
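For example, a minimal sketch assuming your data lives in /home/user/data (a hypothetical path), which the container sees under /data:
docker run -it -v /home/user/data:/data datafusion-cli
❯ select * from '/data/data.csv';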
Usage
See the current usage with datafusion-cli --help:
Apache Arrow <dev@arrow.apache.org>
Command Line Client for DataFusion query engine.
USAGE:
datafusion-cli [OPTIONS]
OPTIONS:
-c, --batch-size <BATCH_SIZE> The batch size of each query, or use DataFusion default
-f, --file <FILE>... Execute commands from file(s), then exit
--format <FORMAT> [default: table] [possible values: csv, tsv, table, json,
nd-json]
-h, --help Print help information
-p, --data-path <DATA_PATH> Path to your data, default to current directory
-q, --quiet Reduce printing other than the results and work quietly
-r, --rc <RC>... Run the provided files on startup instead of ~/.datafusionrc
-V, --version Print version information
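For example, a minimal sketch that executes the statements in a file and prints the results as CSV (query.sql is a hypothetical file name):
$ echo "select 1;" > query.sql
$ datafusion-cli --format csv -f query.sql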
Selecting files directly
Files can be queried directly by enclosing the file or directory name in single quotes ('), as shown in the examples above.
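For instance, a minimal sketch querying a Parquet file in place (tripdata.parquet is a hypothetical file in the current directory):
❯ select count(*) from 'tripdata.parquet';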
It is also possible to explicitly create a table backed by files via CREATE EXTERNAL TABLE, as shown below.
Registering Parquet Data Sources
Parquet data sources can be registered by executing a CREATE EXTERNAL TABLE
SQL statement. It is not necessary to provide schema information for Parquet files.
CREATE EXTERNAL TABLE taxi
STORED AS PARQUET
LOCATION '/mnt/nyctaxi/tripdata.parquet';
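Once registered, the table can be queried like any other; a minimal sketch using the taxi table defined above:
SELECT count(*) FROM taxi;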
Registering CSV Data Sources
CSV data sources can be registered by executing a CREATE EXTERNAL TABLE
SQL statement.
CREATE EXTERNAL TABLE test
STORED AS CSV
WITH HEADER ROW
LOCATION '/path/to/aggregate_test_100.csv';
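With no schema provided, the column names and types are inferred from the file, and the table can be queried immediately; a minimal sketch:
SELECT * FROM test LIMIT 5;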
It is also possible to provide schema information.
CREATE EXTERNAL TABLE test (
c1 VARCHAR NOT NULL,
c2 INT NOT NULL,
c3 SMALLINT NOT NULL,
c4 SMALLINT NOT NULL,
c5 INT NOT NULL,
c6 BIGINT NOT NULL,
c7 SMALLINT NOT NULL,
c8 INT NOT NULL,
c9 BIGINT NOT NULL,
c10 VARCHAR NOT NULL,
c11 FLOAT NOT NULL,
c12 DOUBLE NOT NULL,
c13 VARCHAR NOT NULL
)
STORED AS CSV
LOCATION '/path/to/aggregate_test_100.csv';
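Once registered, the declared schema can be inspected with the \d command described under Commands below:
> \d test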
Registering S3 Data Sources
AWS S3 data sources can be registered by executing a CREATE EXTERNAL TABLE
SQL statement.
CREATE EXTERNAL TABLE test
STORED AS PARQUET
OPTIONS(
'access_key_id' '******',
'secret_access_key' '******',
'region' 'us-east-2'
)
LOCATION 's3://bucket/path/file.parquet';
The supported OPTIONS are:
access_key_id
secret_access_key
session_token
region
It is also possible to simplify SQL statements by supplying these options via environment variables.
$ export AWS_DEFAULT_REGION=us-east-2
$ export AWS_SECRET_ACCESS_KEY=******
$ export AWS_ACCESS_KEY_ID=******
$ datafusion-cli
DataFusion CLI v21.0.0
❯ create external table test stored as parquet location 's3://bucket/path/file.parquet';
0 rows in set. Query took 0.374 seconds.
❯ select * from test;
+----------+----------+
| column_1 | column_2 |
+----------+----------+
| 1 | 2 |
+----------+----------+
1 row in set. Query took 0.171 seconds.
Details of the environment variables that can be used are:
AWS_ACCESS_KEY_ID -> access_key_id
AWS_SECRET_ACCESS_KEY -> secret_access_key
AWS_DEFAULT_REGION -> region
AWS_ENDPOINT -> endpoint
AWS_SESSION_TOKEN -> token
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI -> https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
AWS_ALLOW_HTTP -> set to "true" to permit HTTP connections without TLS
AWS_PROFILE -> Support for using a named profile to supply credentials
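For example, a minimal sketch using a named profile (my-profile is a hypothetical profile name):
$ export AWS_PROFILE=my-profile
$ datafusion-cli
❯ create external table test stored as parquet location 's3://bucket/path/file.parquet';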
Registering OSS Data Sources
Alibaba Cloud OSS data sources can be registered by executing a CREATE EXTERNAL TABLE
SQL statement.
CREATE EXTERNAL TABLE test
STORED AS PARQUET
OPTIONS(
'access_key_id' '******',
'secret_access_key' '******',
'endpoint' 'https://bucket.oss-cn-hangzhou.aliyuncs.com'
)
LOCATION 'oss://bucket/path/file.parquet';
The supported OPTIONS are:
access_key_id
secret_access_key
endpoint
Note that the OSS endpoint format needs to be https://{bucket}.{oss-region-endpoint}, as in the example above.
Registering GCS Data Sources
Google Cloud Storage data sources can be registered by executing a CREATE EXTERNAL TABLE
SQL statement.
CREATE EXTERNAL TABLE test
STORED AS PARQUET
OPTIONS(
'service_account_path' '/tmp/gcs.json'
)
LOCATION 'gs://bucket/path/file.parquet';
The supported OPTIONS are:
service_account_path -> location of service account file
service_account_key -> JSON serialized service account key
application_credentials_path -> location of application credentials file
It is also possible to simplify SQL statements by supplying these options via environment variables.
$ export GOOGLE_SERVICE_ACCOUNT=/tmp/gcs.json
$ datafusion-cli
DataFusion CLI v21.0.0
❯ create external table test stored as parquet location 'gs://bucket/path/file.parquet';
0 rows in set. Query took 0.374 seconds.
❯ select * from test;
+----------+----------+
| column_1 | column_2 |
+----------+----------+
| 1 | 2 |
+----------+----------+
1 row in set. Query took 0.171 seconds.
Details of the environment variables that can be used are:
GOOGLE_SERVICE_ACCOUNT: location of service account file
GOOGLE_SERVICE_ACCOUNT_PATH: (alias) location of service account file
SERVICE_ACCOUNT: (alias) location of service account file
GOOGLE_SERVICE_ACCOUNT_KEY: JSON serialized service account key
GOOGLE_BUCKET: bucket name
GOOGLE_BUCKET_NAME: (alias) bucket name
Commands
Available commands inside DataFusion CLI are:
Quit
> \q
Help
> \?
ListTables
> \d
DescribeTable
> \d table_name
QuietMode
> \quiet [true|false]
List functions
> \h
Search and describe a function (see the example after this list)
> \h function
Show configuration options
> show all;
+-------------------------------------------------+---------+
| name | setting |
+-------------------------------------------------+---------+
| datafusion.execution.batch_size | 8192 |
| datafusion.execution.coalesce_batches | true |
| datafusion.execution.time_zone | UTC |
| datafusion.explain.logical_plan_only | false |
| datafusion.explain.physical_plan_only | false |
| datafusion.optimizer.filter_null_join_keys | false |
| datafusion.optimizer.skip_failed_rules | true |
+-------------------------------------------------+---------+
Set configuration options
> SET datafusion.execution.batch_size to 1024;
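As noted above, a specific function can be looked up by name; a minimal sketch (output omitted):
> \h avg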
Changing Configuration Options
All available configuration options can be seen using SHOW ALL as described above.
You can change the configuration options using environment variables. datafusion-cli looks in the corresponding environment variable with an uppercase name and all '.' converted to '_'.
For example, to set datafusion.execution.batch_size to 1024 you would set the DATAFUSION_EXECUTION_BATCH_SIZE environment variable appropriately:
$ DATAFUSION_EXECUTION_BATCH_SIZE=1024 datafusion-cli
DataFusion CLI v12.0.0
❯ show all;
+-------------------------------------------------+---------+
| name | setting |
+-------------------------------------------------+---------+
| datafusion.execution.batch_size | 1024 |
| datafusion.execution.coalesce_batches | true |
| datafusion.execution.time_zone | UTC |
| datafusion.explain.logical_plan_only | false |
| datafusion.explain.physical_plan_only | false |
| datafusion.optimizer.filter_null_join_keys | false |
| datafusion.optimizer.skip_failed_rules | true |
+-------------------------------------------------+---------+
8 rows in set. Query took 0.002 seconds.
You can also change configuration options using the SET statement:
$ datafusion-cli
DataFusion CLI v13.0.0
❯ show datafusion.execution.batch_size;
+---------------------------------+---------+
| name | setting |
+---------------------------------+---------+
| datafusion.execution.batch_size | 8192 |
+---------------------------------+---------+
1 row in set. Query took 0.011 seconds.
❯ set datafusion.execution.batch_size to 1024;
0 rows in set. Query took 0.000 seconds.
❯ show datafusion.execution.batch_size;
+---------------------------------+---------+
| name | setting |
+---------------------------------+---------+
| datafusion.execution.batch_size | 1024 |
+---------------------------------+---------+
1 row in set. Query took 0.005 seconds.