Arrow Datasets allow you to query against data that has been split across
multiple files. This sharding of data may indicate partitioning, which
can accelerate queries that only touch some partitions (files). Call
open_dataset() to point to a directory of data files and return a
Dataset, then use
dplyr methods to query it.
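For instance, a minimal query might look like the following sketch; the directory path and the columns filtered on are hypothetical:

```r
# A sketch, assuming the arrow and dplyr packages are installed;
# "/path/to/dataset" and the columns used below are hypothetical.
library(arrow)
library(dplyr)

ds <- open_dataset("/path/to/dataset")

ds %>%
  filter(year == 2019) %>%
  select(month) %>%
  collect()  # executes the query and pulls the result into an R data frame
```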
open_dataset(
  sources,
  schema = NULL,
  partitioning = hive_partition(),
  unify_schemas = NULL,
  ...
)
sources: One of:

  * a string path to a directory containing data files
  * a list of Dataset objects as created by this function
  * a list of DatasetFactory objects as created by dataset_factory()
schema: Schema for the dataset. If NULL (the default), the schema
will be inferred from the data sources.
partitioning: When sources is a file path, one of:

  * a Schema, in which case the file paths relative to sources will be
    parsed, and path segments will be matched with the schema fields.
    For example, schema(year = int16(), month = int8()) would create
    partitions for file paths like "2019/01/file.parquet",
    "2019/02/file.parquet", etc.
  * a character vector that defines the field names corresponding to
    those path segments (that is, you're providing the names that would
    correspond to a Schema, but the types will be autodetected)
  * a HivePartitioningFactory, as returned by hive_partition(), which
    parses explicit or autodetected fields from Hive-style path segments
  * NULL for no partitioning

The default is to autodetect Hive-style partitions.
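The options above can be sketched as follows; the dataset directory is hypothetical:

```r
library(arrow)

# A Schema: names and types for the path segments, matching paths
# like "2019/01/file.parquet"
ds <- open_dataset("/path/to/dataset",
                   partitioning = schema(year = int16(), month = int8()))

# A character vector: names only; the types are autodetected
ds <- open_dataset("/path/to/dataset",
                   partitioning = c("year", "month"))

# Hive-style paths like "year=2019/month=01/file.parquet" are
# autodetected by the default, hive_partition()
ds <- open_dataset("/path/to/dataset")
```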
unify_schemas: logical: should all data fragments (files, Datasets)
be scanned in order to create a unified schema from them? If FALSE,
only the first fragment will be inspected for its schema. Use this
fast path when you know and trust that all fragments have an identical
schema. The default is FALSE when creating a dataset from a file path
(because there may be many files and scanning may be slow) but TRUE
when sources is a list of Datasets (because there should be few
Datasets in the list and their Schemas are already in memory).
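For example, to override the file-path default and scan every file when the fragments may differ in columns (the path is hypothetical):

```r
library(arrow)

# Scan every file and unify their schemas; slower to open, but safe
# when fragments may not all have identical columns
ds <- open_dataset("/path/to/dataset", unify_schemas = TRUE)
```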
...: additional arguments passed to dataset_factory() when sources is
a file path, otherwise ignored.
A Dataset R6 object. Use dplyr methods on it to query the data, or
call $NewScan() to construct a query directly.
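A sketch of both query styles on the returned Dataset; the path and the column filtered on are hypothetical:

```r
library(arrow)
library(dplyr)

ds <- open_dataset("/path/to/dataset")

# dplyr: verbs build the query lazily; collect() executes it
df <- ds %>%
  filter(year == 2019) %>%
  collect()

# R6: construct and run a scan directly
tab <- ds$NewScan()$Finish()$ToTable()
```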