Tabular Datasets

Warning

The arrow::dataset namespace is experimental, and a stable API is not yet guaranteed.

The Arrow Datasets library provides functionality to efficiently work with tabular, potentially larger than memory, and multi-file datasets. This includes:

  • A unified interface that supports different sources and file formats (currently, Parquet, ORC, Feather / Arrow IPC, and CSV files) and different file systems (local, cloud).

  • Discovery of sources (crawling directories, handling partitioned datasets with various partitioning schemes, basic schema normalization, …)

  • Optimized reading with predicate pushdown (filtering rows), projection (selecting and deriving columns), and optionally parallel reading.

The goal is to expand support to other file formats and data sources (e.g. database connections) in the future.

Reading Datasets

For the examples below, let’s create a small dataset consisting of a directory with two Parquet files:

// Generate some data for the rest of this example.
std::shared_ptr<arrow::Table> CreateTable() {
  auto schema =
      arrow::schema({arrow::field("a", arrow::int64()), arrow::field("b", arrow::int64()),
                     arrow::field("c", arrow::int64())});
  std::shared_ptr<arrow::Array> array_a;
  std::shared_ptr<arrow::Array> array_b;
  std::shared_ptr<arrow::Array> array_c;
  arrow::NumericBuilder<arrow::Int64Type> builder;
  ABORT_ON_FAILURE(builder.AppendValues({0, 1, 2, 3, 4, 5, 6, 7, 8, 9}));
  ABORT_ON_FAILURE(builder.Finish(&array_a));
  builder.Reset();
  ABORT_ON_FAILURE(builder.AppendValues({9, 8, 7, 6, 5, 4, 3, 2, 1, 0}));
  ABORT_ON_FAILURE(builder.Finish(&array_b));
  builder.Reset();
  ABORT_ON_FAILURE(builder.AppendValues({1, 2, 1, 2, 1, 2, 1, 2, 1, 2}));
  ABORT_ON_FAILURE(builder.Finish(&array_c));
  return arrow::Table::Make(schema, {array_a, array_b, array_c});
}

// Set up a dataset by writing two Parquet files.
std::string CreateExampleParquetDataset(const std::shared_ptr<fs::FileSystem>& filesystem,
                                        const std::string& root_path) {
  auto base_path = root_path + "/parquet_dataset";
  ABORT_ON_FAILURE(filesystem->CreateDir(base_path));
  // Create an Arrow Table
  auto table = CreateTable();
  // Write it into two Parquet files
  auto output = filesystem->OpenOutputStream(base_path + "/data1.parquet").ValueOrDie();
  ABORT_ON_FAILURE(parquet::arrow::WriteTable(
      *table->Slice(0, 5), arrow::default_memory_pool(), output, /*chunk_size=*/2048));
  output = filesystem->OpenOutputStream(base_path + "/data2.parquet").ValueOrDie();
  ABORT_ON_FAILURE(parquet::arrow::WriteTable(
      *table->Slice(5), arrow::default_memory_pool(), output, /*chunk_size=*/2048));
  return base_path;
}

(See the full example at bottom: Full Example.)

Dataset discovery

An arrow::dataset::Dataset object can be created using the various arrow::dataset::DatasetFactory objects. Here, we’ll use the arrow::dataset::FileSystemDatasetFactory, which can create a dataset given a base directory path:

// Read the whole dataset with the given format, without partitioning.
std::shared_ptr<arrow::Table> ScanWholeDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem,
    const std::shared_ptr<ds::FileFormat>& format, const std::string& base_dir) {
  // Create a dataset by scanning the filesystem for files
  fs::FileSelector selector;
  selector.base_dir = base_dir;
  auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format,
                                                    ds::FileSystemFactoryOptions())
                     .ValueOrDie();
  auto dataset = factory->Finish().ValueOrDie();
  // Print out the fragments
  for (const auto& fragment : dataset->GetFragments().ValueOrDie()) {
    std::cout << "Found fragment: " << (*fragment)->ToString() << std::endl;
  }
  // Read the entire dataset as a Table
  auto scan_builder = dataset->NewScan().ValueOrDie();
  auto scanner = scan_builder->Finish().ValueOrDie();
  return scanner->ToTable().ValueOrDie();
}

We’re also passing the filesystem and the file format to use for reading. This lets us choose between (for example) reading local files or files in Amazon S3, or between Parquet and CSV.

In addition to searching a base directory, we can list file paths manually.
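
For example, a minimal sketch that passes an explicit list of paths to the same factory. The file names are hypothetical, and the filesystem and format variables are reused from the surrounding snippets:

// A sketch: build a dataset from an explicit list of files instead of
// crawling a directory with a FileSelector.
std::vector<std::string> paths = {"/tmp/parquet_dataset/data1.parquet",
                                  "/tmp/parquet_dataset/data2.parquet"};
auto factory = ds::FileSystemDatasetFactory::Make(filesystem, paths, format,
                                                  ds::FileSystemFactoryOptions())
                   .ValueOrDie();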

Creating an arrow::dataset::Dataset does not begin reading the data itself. It only crawls the directory to find all the files (if needed), which can be retrieved with arrow::dataset::FileSystemDataset::files():

// Print out the files crawled (only for FileSystemDataset)
for (const auto& filename : dataset->files()) {
  std::cout << filename << std::endl;
}

…and infers the dataset’s schema (by default from the first file):

std::cout << dataset->schema()->ToString() << std::endl;
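
If the first file is not representative, the factory can be asked to inspect more (or all) fragments before finishing the dataset. A minimal sketch, reusing the factory from the snippet above:

// A sketch: unify the schemas of all discovered fragments instead of
// trusting only the first file.
ds::InspectOptions inspect_options;
inspect_options.fragments = ds::InspectOptions::kInspectAllFragments;
auto unified_schema = factory->Inspect(inspect_options).ValueOrDie();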

Using the arrow::dataset::Dataset::NewScan() method, we can build an arrow::dataset::Scanner and read the dataset (or a portion of it) into an arrow::Table with the arrow::dataset::Scanner::ToTable() method, as shown at the end of the ScanWholeDataset snippet above.

Note

Depending on the size of your dataset, this can require a lot of memory; see Filtering data below for how to filter and project the data that is read.
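
If materializing the whole table is too expensive, the scanner can instead yield record batches incrementally. A minimal sketch, assuming a scanner built as above:

// Stream the dataset batch-by-batch instead of building one large Table.
auto batch_iter = scanner->ScanBatches().ValueOrDie();
while (true) {
  auto tagged = batch_iter.Next().ValueOrDie();
  if (!tagged.record_batch) break;  // end of stream
  std::cout << "Batch with " << tagged.record_batch->num_rows() << " rows" << std::endl;
}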

Reading different file formats

The above examples use Parquet files on local disk, but the Dataset API provides a consistent interface across multiple file formats and filesystems. (See Reading from cloud storage for more information on the latter.) Currently, Parquet, ORC, Feather / Arrow IPC, and CSV file formats are supported; more formats are planned in the future.

If we save the table as Feather files instead of Parquet files:

// Set up a dataset by writing two Feather files.
std::string CreateExampleFeatherDataset(const std::shared_ptr<fs::FileSystem>& filesystem,
                                        const std::string& root_path) {
  auto base_path = root_path + "/feather_dataset";
  ABORT_ON_FAILURE(filesystem->CreateDir(base_path));
  // Create an Arrow Table
  auto table = CreateTable();
  // Write it into two Feather files
  auto output = filesystem->OpenOutputStream(base_path + "/data1.feather").ValueOrDie();
  auto writer = arrow::ipc::MakeFileWriter(output.get(), table->schema()).ValueOrDie();
  ABORT_ON_FAILURE(writer->WriteTable(*table->Slice(0, 5)));
  ABORT_ON_FAILURE(writer->Close());
  output = filesystem->OpenOutputStream(base_path + "/data2.feather").ValueOrDie();
  writer = arrow::ipc::MakeFileWriter(output.get(), table->schema()).ValueOrDie();
  ABORT_ON_FAILURE(writer->WriteTable(*table->Slice(5)));
  ABORT_ON_FAILURE(writer->Close());
  return base_path;
}

…then we can read the Feather files by passing an arrow::dataset::IpcFileFormat:

auto format = std::make_shared<ds::IpcFileFormat>();
// ...
auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format, options)
                   .ValueOrDie();

Customizing file formats

arrow::dataset::FileFormat objects have properties that control how files are read. For example:

auto format = std::make_shared<ds::ParquetFileFormat>();
format->reader_options.dict_columns.insert("a");

This will configure column "a" to be dictionary-encoded when it is read. Similarly, setting arrow::dataset::CsvFileFormat::parse_options lets us change things like reading comma-separated or tab-separated data.
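
For instance, a minimal sketch for reading tab-separated data (assuming the arrow/dataset/file_csv.h header is included):

// A sketch: configure the CSV format to read tab-separated data.
auto csv_format = std::make_shared<ds::CsvFileFormat>();
csv_format->parse_options.delimiter = '\t';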

Additionally, passing an arrow::dataset::FragmentScanOptions to arrow::dataset::ScannerBuilder::FragmentScanOptions() offers fine-grained control over data scanning. For example, for CSV files, we can change what values are converted into Boolean true and false at scan time.
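
For example, here is a sketch that changes which strings are parsed as Boolean values when scanning CSV files. The "Y"/"N" values are illustrative, and scan_builder is an arrow::dataset::ScannerBuilder as in the examples below:

// A sketch: treat "Y" and "N" as Boolean true/false at scan time.
auto csv_scan_options = std::make_shared<ds::CsvFragmentScanOptions>();
csv_scan_options->convert_options.true_values = {"Y"};
csv_scan_options->convert_options.false_values = {"N"};
ABORT_ON_FAILURE(scan_builder->FragmentScanOptions(csv_scan_options));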

Filtering data

So far, we’ve been reading the entire dataset, but if we need only a subset of the data, reading everything wastes time and memory. The arrow::dataset::Scanner offers control over what data to read.

In this snippet, we use arrow::dataset::ScannerBuilder::Project() to select which columns to read:

// Read a dataset, but select only column "b" and only rows where b < 4.
//
// This is useful when you only want a few columns from a dataset. Where possible,
// Datasets will push down the column selection such that less work is done.
std::shared_ptr<arrow::Table> FilterAndSelectDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem,
    const std::shared_ptr<ds::FileFormat>& format, const std::string& base_dir) {
  fs::FileSelector selector;
  selector.base_dir = base_dir;
  auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format,
                                                    ds::FileSystemFactoryOptions())
                     .ValueOrDie();
  auto dataset = factory->Finish().ValueOrDie();
  // Read specified columns with a row filter
  auto scan_builder = dataset->NewScan().ValueOrDie();
  ABORT_ON_FAILURE(scan_builder->Project({"b"}));
  ABORT_ON_FAILURE(scan_builder->Filter(cp::less(cp::field_ref("b"), cp::literal(4))));
  auto scanner = scan_builder->Finish().ValueOrDie();
  return scanner->ToTable().ValueOrDie();
}

Some formats, such as Parquet, can reduce I/O costs here by reading only the specified columns from the filesystem.

A filter can be provided with arrow::dataset::ScannerBuilder::Filter(), so that rows which do not match the filter predicate will not be included in the returned table. Again, some formats, such as Parquet, can use this filter to reduce the amount of I/O needed. The FilterAndSelectDataset snippet above applies such a filter alongside the column selection.

Projecting columns

In addition to selecting columns, arrow::dataset::ScannerBuilder::Project() can also be used for more complex projections, such as renaming columns, casting them to other types, and even deriving new columns based on evaluating expressions.

In this case, we pass a vector of expressions used to construct column values and a vector of names for the columns:

// Read a dataset, but with column projection.
//
// This is useful to derive new columns from existing data. For example, here we
// demonstrate casting a column to a different type, and turning a numeric column into a
// boolean column based on a predicate. You could also rename columns or perform
// computations involving multiple columns.
std::shared_ptr<arrow::Table> ProjectDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem,
    const std::shared_ptr<ds::FileFormat>& format, const std::string& base_dir) {
  fs::FileSelector selector;
  selector.base_dir = base_dir;
  auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format,
                                                    ds::FileSystemFactoryOptions())
                     .ValueOrDie();
  auto dataset = factory->Finish().ValueOrDie();
  // Read specified columns with a row filter
  auto scan_builder = dataset->NewScan().ValueOrDie();
  ABORT_ON_FAILURE(scan_builder->Project(
      {
          // Leave column "a" as-is.
          cp::field_ref("a"),
          // Cast column "b" to float32.
          cp::call("cast", {cp::field_ref("b")},
                   arrow::compute::CastOptions::Safe(arrow::float32())),
          // Derive a boolean column from "c".
          cp::equal(cp::field_ref("c"), cp::literal(1)),
      },
      {"a_renamed", "b_as_float32", "c_1"}));
  auto scanner = scan_builder->Finish().ValueOrDie();
  return scanner->ToTable().ValueOrDie();
}

This also determines the column selection; only the given columns will be present in the resulting table. If you want to include a derived column in addition to the existing columns, you can build up the expressions from the dataset schema:

// Read a dataset, but with column projection.
//
// This time, we read all original columns plus one derived column. This simply combines
// the previous two examples: selecting a subset of columns by name, and deriving new
// columns with an expression.
std::shared_ptr<arrow::Table> SelectAndProjectDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem,
    const std::shared_ptr<ds::FileFormat>& format, const std::string& base_dir) {
  fs::FileSelector selector;
  selector.base_dir = base_dir;
  auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format,
                                                    ds::FileSystemFactoryOptions())
                     .ValueOrDie();
  auto dataset = factory->Finish().ValueOrDie();
  // Read specified columns with a row filter
  auto scan_builder = dataset->NewScan().ValueOrDie();
  std::vector<std::string> names;
  std::vector<cp::Expression> exprs;
  // Read all the original columns.
  for (const auto& field : dataset->schema()->fields()) {
    names.push_back(field->name());
    exprs.push_back(cp::field_ref(field->name()));
  }
  // Also derive a new column.
  names.emplace_back("b_large");
  exprs.push_back(cp::greater(cp::field_ref("b"), cp::literal(1)));
  ABORT_ON_FAILURE(scan_builder->Project(exprs, names));
  auto scanner = scan_builder->Finish().ValueOrDie();
  return scanner->ToTable().ValueOrDie();
}

Note

When combining filters and projections, Arrow will determine all necessary columns to read. For instance, if you filter on a column that isn’t ultimately selected, Arrow will still read the column to evaluate the filter.
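
For example, in this sketch (reusing a scan_builder as above), column "c" is read from the files to evaluate the filter even though only "b" appears in the result:

// "c" is only referenced by the filter; Arrow still reads it to evaluate
// the predicate, but it is not included in the resulting table.
ABORT_ON_FAILURE(scan_builder->Project({"b"}));
ABORT_ON_FAILURE(scan_builder->Filter(cp::equal(cp::field_ref("c"), cp::literal(1))));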

Reading and writing partitioned data

So far, we’ve been working with datasets consisting of flat directories of files. Oftentimes, a dataset will have one or more columns that are frequently filtered on. By organizing the files into a nested directory structure, we can define a partitioned dataset, where sub-directory names hold information about which subset of the data is stored in that directory. We can then filter data more efficiently, using that information to avoid loading files that don’t match the filter at all.

For example, a dataset partitioned by year and month may have the following layout:

dataset_name/
  year=2007/
    month=01/
       data0.parquet
       data1.parquet
       ...
    month=02/
       data0.parquet
       data1.parquet
       ...
    month=03/
    ...
  year=2008/
    month=01/
    ...
  ...

The above partitioning scheme uses “/key=value/” directory names, as found in Apache Hive. Under this convention, the file at dataset_name/year=2007/month=01/data0.parquet contains only data for which year == 2007 and month == 01.

Let’s create a small partitioned dataset. For this, we’ll use Arrow’s dataset writing functionality.

// Set up a dataset by writing files with partitioning
std::string CreateExampleParquetHivePartitionedDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem, const std::string& root_path) {
  auto base_path = root_path + "/parquet_dataset";
  ABORT_ON_FAILURE(filesystem->CreateDir(base_path));
  // Create an Arrow Table
  auto schema = arrow::schema(
      {arrow::field("a", arrow::int64()), arrow::field("b", arrow::int64()),
       arrow::field("c", arrow::int64()), arrow::field("part", arrow::utf8())});
  std::vector<std::shared_ptr<arrow::Array>> arrays(4);
  arrow::NumericBuilder<arrow::Int64Type> builder;
  ABORT_ON_FAILURE(builder.AppendValues({0, 1, 2, 3, 4, 5, 6, 7, 8, 9}));
  ABORT_ON_FAILURE(builder.Finish(&arrays[0]));
  builder.Reset();
  ABORT_ON_FAILURE(builder.AppendValues({9, 8, 7, 6, 5, 4, 3, 2, 1, 0}));
  ABORT_ON_FAILURE(builder.Finish(&arrays[1]));
  builder.Reset();
  ABORT_ON_FAILURE(builder.AppendValues({1, 2, 1, 2, 1, 2, 1, 2, 1, 2}));
  ABORT_ON_FAILURE(builder.Finish(&arrays[2]));
  arrow::StringBuilder string_builder;
  ABORT_ON_FAILURE(
      string_builder.AppendValues({"a", "a", "a", "a", "a", "b", "b", "b", "b", "b"}));
  ABORT_ON_FAILURE(string_builder.Finish(&arrays[3]));
  auto table = arrow::Table::Make(schema, arrays);
  // Write it using Datasets
  auto dataset = std::make_shared<ds::InMemoryDataset>(table);
  auto scanner_builder = dataset->NewScan().ValueOrDie();
  auto scanner = scanner_builder->Finish().ValueOrDie();

  // The partition schema determines which fields are part of the partitioning.
  auto partition_schema = arrow::schema({arrow::field("part", arrow::utf8())});
  // We'll use Hive-style partitioning, which creates directories with "key=value" pairs.
  auto partitioning = std::make_shared<ds::HivePartitioning>(partition_schema);
  // We'll write Parquet files.
  auto format = std::make_shared<ds::ParquetFileFormat>();
  ds::FileSystemDatasetWriteOptions write_options;
  write_options.file_write_options = format->DefaultWriteOptions();
  write_options.filesystem = filesystem;
  write_options.base_dir = base_path;
  write_options.partitioning = partitioning;
  write_options.basename_template = "part{i}.parquet";
  ABORT_ON_FAILURE(ds::FileSystemDataset::Write(write_options, scanner));
  return base_path;
}

The above code created a directory with two subdirectories (“part=a” and “part=b”), and the Parquet files written into those directories no longer include the “part” column.

Reading this dataset, we now specify that the dataset should use a Hive-like partitioning scheme:

// Read an entire dataset, but with partitioning information.
std::shared_ptr<arrow::Table> ScanPartitionedDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem,
    const std::shared_ptr<ds::FileFormat>& format, const std::string& base_dir) {
  fs::FileSelector selector;
  selector.base_dir = base_dir;
  selector.recursive = true;  // Make sure to search subdirectories
  ds::FileSystemFactoryOptions options;
  // We'll use Hive-style partitioning. We'll let Arrow Datasets infer the partition
  // schema.
  options.partitioning = ds::HivePartitioning::MakeFactory();
  auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format, options)
                     .ValueOrDie();
  auto dataset = factory->Finish().ValueOrDie();
  // Print out the fragments
  for (const auto& fragment : dataset->GetFragments().ValueOrDie()) {
    std::cout << "Found fragment: " << (*fragment)->ToString() << std::endl;
    std::cout << "Partition expression: "
              << (*fragment)->partition_expression().ToString() << std::endl;
  }
  auto scan_builder = dataset->NewScan().ValueOrDie();
  auto scanner = scan_builder->Finish().ValueOrDie();
  return scanner->ToTable().ValueOrDie();
}

Although the partition fields are not included in the actual Parquet files, they will be added back to the resulting table when scanning this dataset:

$ ./debug/dataset_documentation_example file:///tmp parquet_hive partitioned
Found fragment: /tmp/parquet_dataset/part=a/part0.parquet
Partition expression: (part == "a")
Found fragment: /tmp/parquet_dataset/part=b/part1.parquet
Partition expression: (part == "b")
Read 20 rows
a: int64
  -- field metadata --
  PARQUET:field_id: '1'
b: double
  -- field metadata --
  PARQUET:field_id: '2'
c: int64
  -- field metadata --
  PARQUET:field_id: '3'
part: string
----
# snip...

We can now filter on the partition keys, which avoids loading files altogether if they do not match the filter:

// Read an entire dataset, but with partitioning information. Also, filter the dataset on
// the partition values.
std::shared_ptr<arrow::Table> FilterPartitionedDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem,
    const std::shared_ptr<ds::FileFormat>& format, const std::string& base_dir) {
  fs::FileSelector selector;
  selector.base_dir = base_dir;
  selector.recursive = true;
  ds::FileSystemFactoryOptions options;
  options.partitioning = ds::HivePartitioning::MakeFactory();
  auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format, options)
                     .ValueOrDie();
  auto dataset = factory->Finish().ValueOrDie();
  auto scan_builder = dataset->NewScan().ValueOrDie();
  // Filter based on the partition values. This will mean that we won't even read the
  // files whose partition expressions don't match the filter.
  ABORT_ON_FAILURE(
      scan_builder->Filter(cp::equal(cp::field_ref("part"), cp::literal("b"))));
  auto scanner = scan_builder->Finish().ValueOrDie();
  return scanner->ToTable().ValueOrDie();
}

Different partitioning schemes

The above example uses a Hive-like directory scheme, such as “/year=2009/month=11/day=15”. We specified this by passing the Hive partitioning factory. In this case, the types of the partition keys are inferred from the file paths.

It is also possible to directly construct the partitioning and explicitly define the schema of the partition keys. For example:

auto part = std::make_shared<ds::HivePartitioning>(arrow::schema({
    arrow::field("year", arrow::int16()),
    arrow::field("month", arrow::int8()),
    arrow::field("day", arrow::int32())
}));

Arrow supports another partitioning scheme, “directory partitioning”, where the segments in the file path represent the values of the partition keys without including the name (the field names are implicit in the segment’s index). For example, given field names “year”, “month”, and “day”, one path might be “/2019/11/15”.

Since the names are not included in the file paths, these must be specified when constructing a directory partitioning:

auto part = ds::DirectoryPartitioning::MakeFactory({"year", "month", "day"});

Directory partitioning also supports providing a full schema rather than inferring types from file paths.
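
For example, a sketch mirroring the Hive example above:

auto part = std::make_shared<ds::DirectoryPartitioning>(arrow::schema({
    arrow::field("year", arrow::int16()),
    arrow::field("month", arrow::int8()),
    arrow::field("day", arrow::int32())
}));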

Partitioning performance considerations

Partitioning datasets has two aspects that affect performance: it increases the number of files and it creates a directory structure around the files. Both of these have benefits as well as costs. Depending on the configuration and the size of your dataset, the costs can outweigh the benefits.

Because partitions split up the dataset into multiple files, partitioned datasets can be read and written with parallelism. However, each additional file adds a little overhead in processing for filesystem interaction. It also increases the overall dataset size, since each file has some shared metadata. For example, each Parquet file contains the schema and row-group-level statistics. The number of partitions is a floor for the number of files: if you partition a dataset by date with a year of data, you will have at least 365 files, and if you further partition by another dimension with 1,000 unique values, you will have up to 365,000 files. Partitioning this finely often leads to small files that mostly consist of metadata.

Partitioned datasets create nested folder structures, and those allow us to prune which files are loaded in a scan. However, this adds overhead to discovering files in the dataset, as we’ll need to recursively “list directory” to find the data files. Overly fine partitioning can cause problems here: partitioning a dataset by date for a year’s worth of data will require 365 list calls to find all the files; adding another column with cardinality 1,000 will make that 365,365 calls.

The optimal partitioning layout will depend on your data, access patterns, and which systems will be reading the data. Most systems, including Arrow, should work across a range of file sizes and partitioning layouts, but there are extremes you should avoid. These guidelines can help avoid some known worst cases:

  • Avoid files smaller than 20MB and larger than 2GB.

  • Avoid partitioning layouts with more than 10,000 distinct partitions.

For file formats that have a notion of groups within a file, such as Parquet, similar guidelines apply. Row groups can provide parallelism when reading and allow data skipping based on statistics, but very small groups can cause metadata to be a significant portion of file size. Arrow’s file writer provides sensible defaults for group sizing in most cases.
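
For instance, the chunk_size argument of the parquet::arrow::WriteTable() call used earlier caps the number of rows per row group. A sketch with a larger cap, reusing the table and output stream variables from the write example above (the value is illustrative, not a recommendation):

// Write up to 64Ki rows per row group instead of the small groups used in
// the example dataset above.
ABORT_ON_FAILURE(parquet::arrow::WriteTable(*table, arrow::default_memory_pool(),
                                            output, /*chunk_size=*/64 * 1024));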

Reading from other data sources

Reading in-memory data

If you already have data in memory that you’d like to use with the Datasets API (e.g. to filter/project data, or to write it out to a filesystem), you can wrap it in an arrow::dataset::InMemoryDataset:

// FromRecordBatches and NewScan return arrow::Result, so unwrap them in real code.
auto table = arrow::Table::FromRecordBatches(...).ValueOrDie();
auto dataset = std::make_shared<arrow::dataset::InMemoryDataset>(std::move(table));
// Scan the dataset, filter it, etc.
auto scanner_builder = dataset->NewScan().ValueOrDie();

In fact, the partitioned-write example above (CreateExampleParquetHivePartitionedDataset, under Reading and writing partitioned data) used an InMemoryDataset in exactly this way to write our sample data to local disk.

Reading from cloud storage

In addition to local files, Arrow Datasets also support reading from cloud storage systems, such as Amazon S3, by passing a different filesystem.

See the filesystem docs for more details on the available filesystems.
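
For example, a minimal sketch that obtains a filesystem from an S3 URI (the bucket name is hypothetical, and your Arrow build must have S3 support enabled):

// The resulting filesystem can be used exactly like the local filesystem in
// the examples above; root_path receives the path within the bucket.
std::string root_path;
auto s3fs = fs::FileSystemFromUri("s3://my-bucket/dataset-dir", &root_path).ValueOrDie();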

A note on transactions & ACID guarantees

The dataset API offers no transaction support or any ACID guarantees. This affects both reading and writing. Concurrent reads are fine. Concurrent writes, or writes concurrent with reads, may have unexpected behavior. Various approaches can be used to avoid operating on the same files, such as using a unique basename template for each writer, a temporary directory for new files, or separate storage of the file list instead of relying on directory discovery.
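
For example, a sketch of the unique-basename-template approach, reusing the write_options from the partitioned-write example (the writer ID here is hypothetical, e.g. a per-process UUID):

// Each concurrent writer gets a distinct template, so output file names
// cannot collide; "{i}" is still expanded per written fragment.
write_options.basename_template = "writer-1234-part{i}.parquet";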

Unexpectedly killing the process while a write is in progress can leave the system in an inconsistent state. Write calls generally return as soon as the bytes to be written have been completely delivered to the OS page cache. Even though a write operation has completed, it is possible for part of the file to be lost if there is a sudden power loss immediately after the write call.

Most file formats have magic numbers which are written at the end. This means a partial file write can safely be detected and discarded. The CSV file format does not have any such concept, and a partially written CSV file may be detected as valid.

Full Example

// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

// This example showcases various ways to work with Datasets. It's
// intended to be paired with the documentation.

#include <arrow/api.h>
#include <arrow/compute/cast.h>
#include <arrow/compute/exec/expression.h>
#include <arrow/dataset/dataset.h>
#include <arrow/dataset/discovery.h>
#include <arrow/dataset/file_base.h>
#include <arrow/dataset/file_ipc.h>
#include <arrow/dataset/file_parquet.h>
#include <arrow/dataset/scanner.h>
#include <arrow/filesystem/filesystem.h>
#include <arrow/ipc/writer.h>
#include <arrow/util/iterator.h>
#include <parquet/arrow/writer.h>

#include <iostream>
#include <vector>

namespace ds = arrow::dataset;
namespace fs = arrow::fs;
namespace cp = arrow::compute;

// Note: no trailing semicolon after while (0), so the macro behaves like a
// single statement at its call sites.
#define ABORT_ON_FAILURE(expr)                     \
  do {                                             \
    arrow::Status status_ = (expr);                \
    if (!status_.ok()) {                           \
      std::cerr << status_.message() << std::endl; \
      abort();                                     \
    }                                              \
  } while (0)

// (Doc section: Reading Datasets)
// Generate some data for the rest of this example.
std::shared_ptr<arrow::Table> CreateTable() {
  auto schema =
      arrow::schema({arrow::field("a", arrow::int64()), arrow::field("b", arrow::int64()),
                     arrow::field("c", arrow::int64())});
  std::shared_ptr<arrow::Array> array_a;
  std::shared_ptr<arrow::Array> array_b;
  std::shared_ptr<arrow::Array> array_c;
  arrow::NumericBuilder<arrow::Int64Type> builder;
  ABORT_ON_FAILURE(builder.AppendValues({0, 1, 2, 3, 4, 5, 6, 7, 8, 9}));
  ABORT_ON_FAILURE(builder.Finish(&array_a));
  builder.Reset();
  ABORT_ON_FAILURE(builder.AppendValues({9, 8, 7, 6, 5, 4, 3, 2, 1, 0}));
  ABORT_ON_FAILURE(builder.Finish(&array_b));
  builder.Reset();
  ABORT_ON_FAILURE(builder.AppendValues({1, 2, 1, 2, 1, 2, 1, 2, 1, 2}));
  ABORT_ON_FAILURE(builder.Finish(&array_c));
  return arrow::Table::Make(schema, {array_a, array_b, array_c});
}

// Set up a dataset by writing two Parquet files.
std::string CreateExampleParquetDataset(const std::shared_ptr<fs::FileSystem>& filesystem,
                                        const std::string& root_path) {
  auto base_path = root_path + "/parquet_dataset";
  ABORT_ON_FAILURE(filesystem->CreateDir(base_path));
  // Create an Arrow Table
  auto table = CreateTable();
  // Write it into two Parquet files
  auto output = filesystem->OpenOutputStream(base_path + "/data1.parquet").ValueOrDie();
  ABORT_ON_FAILURE(parquet::arrow::WriteTable(
      *table->Slice(0, 5), arrow::default_memory_pool(), output, /*chunk_size=*/2048));
  output = filesystem->OpenOutputStream(base_path + "/data2.parquet").ValueOrDie();
  ABORT_ON_FAILURE(parquet::arrow::WriteTable(
      *table->Slice(5), arrow::default_memory_pool(), output, /*chunk_size=*/2048));
  return base_path;
}
// (Doc section: Reading Datasets)

// (Doc section: Reading different file formats)
// Set up a dataset by writing two Feather files.
std::string CreateExampleFeatherDataset(const std::shared_ptr<fs::FileSystem>& filesystem,
                                        const std::string& root_path) {
  auto base_path = root_path + "/feather_dataset";
  ABORT_ON_FAILURE(filesystem->CreateDir(base_path));
  // Create an Arrow Table
  auto table = CreateTable();
  // Write it into two Feather files
  auto output = filesystem->OpenOutputStream(base_path + "/data1.feather").ValueOrDie();
  auto writer = arrow::ipc::MakeFileWriter(output.get(), table->schema()).ValueOrDie();
  ABORT_ON_FAILURE(writer->WriteTable(*table->Slice(0, 5)));
  ABORT_ON_FAILURE(writer->Close());
  output = filesystem->OpenOutputStream(base_path + "/data2.feather").ValueOrDie();
  writer = arrow::ipc::MakeFileWriter(output.get(), table->schema()).ValueOrDie();
  ABORT_ON_FAILURE(writer->WriteTable(*table->Slice(5)));
  ABORT_ON_FAILURE(writer->Close());
  return base_path;
}
// (Doc section: Reading different file formats)

// (Doc section: Reading and writing partitioned data)
// Set up a dataset by writing files with partitioning
std::string CreateExampleParquetHivePartitionedDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem, const std::string& root_path) {
  auto base_path = root_path + "/parquet_dataset";
  ABORT_ON_FAILURE(filesystem->CreateDir(base_path));
  // Create an Arrow Table
  auto schema = arrow::schema(
      {arrow::field("a", arrow::int64()), arrow::field("b", arrow::int64()),
       arrow::field("c", arrow::int64()), arrow::field("part", arrow::utf8())});
  std::vector<std::shared_ptr<arrow::Array>> arrays(4);
  arrow::NumericBuilder<arrow::Int64Type> builder;
  ABORT_ON_FAILURE(builder.AppendValues({0, 1, 2, 3, 4, 5, 6, 7, 8, 9}));
  ABORT_ON_FAILURE(builder.Finish(&arrays[0]));
  builder.Reset();
  ABORT_ON_FAILURE(builder.AppendValues({9, 8, 7, 6, 5, 4, 3, 2, 1, 0}));
  ABORT_ON_FAILURE(builder.Finish(&arrays[1]));
  builder.Reset();
  ABORT_ON_FAILURE(builder.AppendValues({1, 2, 1, 2, 1, 2, 1, 2, 1, 2}));
  ABORT_ON_FAILURE(builder.Finish(&arrays[2]));
  arrow::StringBuilder string_builder;
  ABORT_ON_FAILURE(
      string_builder.AppendValues({"a", "a", "a", "a", "a", "b", "b", "b", "b", "b"}));
  ABORT_ON_FAILURE(string_builder.Finish(&arrays[3]));
  auto table = arrow::Table::Make(schema, arrays);
  // Write it using Datasets
  auto dataset = std::make_shared<ds::InMemoryDataset>(table);
  auto scanner_builder = dataset->NewScan().ValueOrDie();
  auto scanner = scanner_builder->Finish().ValueOrDie();

  // The partition schema determines which fields are part of the partitioning.
  auto partition_schema = arrow::schema({arrow::field("part", arrow::utf8())});
  // We'll use Hive-style partitioning, which creates directories with "key=value" pairs.
  auto partitioning = std::make_shared<ds::HivePartitioning>(partition_schema);
  // We'll write Parquet files.
  auto format = std::make_shared<ds::ParquetFileFormat>();
  ds::FileSystemDatasetWriteOptions write_options;
  write_options.file_write_options = format->DefaultWriteOptions();
  write_options.filesystem = filesystem;
  write_options.base_dir = base_path;
  write_options.partitioning = partitioning;
  write_options.basename_template = "part{i}.parquet";
  ABORT_ON_FAILURE(ds::FileSystemDataset::Write(write_options, scanner));
  return base_path;
}
// (Doc section: Reading and writing partitioned data)

// (Doc section: Dataset discovery)
// Read the whole dataset with the given format, without partitioning.
std::shared_ptr<arrow::Table> ScanWholeDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem,
    const std::shared_ptr<ds::FileFormat>& format, const std::string& base_dir) {
  // Create a dataset by scanning the filesystem for files
  fs::FileSelector selector;
  selector.base_dir = base_dir;
  auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format,
                                                    ds::FileSystemFactoryOptions())
                     .ValueOrDie();
  auto dataset = factory->Finish().ValueOrDie();
  // Print out the fragments
  for (const auto& fragment : dataset->GetFragments().ValueOrDie()) {
    std::cout << "Found fragment: " << (*fragment)->ToString() << std::endl;
  }
  // Read the entire dataset as a Table
  auto scan_builder = dataset->NewScan().ValueOrDie();
  auto scanner = scan_builder->Finish().ValueOrDie();
  return scanner->ToTable().ValueOrDie();
}
// (Doc section: Dataset discovery)

// (Doc section: Filtering data)
// Read a dataset, but select only column "b" and only rows where b < 4.
//
// This is useful when you only want a few columns from a dataset. Where possible,
// Datasets will push down the column selection such that less work is done.
std::shared_ptr<arrow::Table> FilterAndSelectDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem,
    const std::shared_ptr<ds::FileFormat>& format, const std::string& base_dir) {
  fs::FileSelector selector;
  selector.base_dir = base_dir;
  auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format,
                                                    ds::FileSystemFactoryOptions())
                     .ValueOrDie();
  auto dataset = factory->Finish().ValueOrDie();
  // Read specified columns with a row filter
  auto scan_builder = dataset->NewScan().ValueOrDie();
  ABORT_ON_FAILURE(scan_builder->Project({"b"}));
  ABORT_ON_FAILURE(scan_builder->Filter(cp::less(cp::field_ref("b"), cp::literal(4))));
  auto scanner = scan_builder->Finish().ValueOrDie();
  return scanner->ToTable().ValueOrDie();
}
// (Doc section: Filtering data)

// (Doc section: Projecting columns)
// Read a dataset, but with column projection.
//
// This is useful to derive new columns from existing data. For example, here we
// demonstrate casting a column to a different type, and turning a numeric column into a
// boolean column based on a predicate. You could also rename columns or perform
// computations involving multiple columns.
std::shared_ptr<arrow::Table> ProjectDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem,
    const std::shared_ptr<ds::FileFormat>& format, const std::string& base_dir) {
  fs::FileSelector selector;
  selector.base_dir = base_dir;
  auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format,
                                                    ds::FileSystemFactoryOptions())
                     .ValueOrDie();
  auto dataset = factory->Finish().ValueOrDie();
  // Read specified columns with a row filter
  auto scan_builder = dataset->NewScan().ValueOrDie();
  ABORT_ON_FAILURE(scan_builder->Project(
      {
          // Leave column "a" as-is.
          cp::field_ref("a"),
          // Cast column "b" to float32.
          cp::call("cast", {cp::field_ref("b")},
                   arrow::compute::CastOptions::Safe(arrow::float32())),
          // Derive a boolean column from "c".
          cp::equal(cp::field_ref("c"), cp::literal(1)),
      },
      {"a_renamed", "b_as_float32", "c_1"}));
  auto scanner = scan_builder->Finish().ValueOrDie();
  return scanner->ToTable().ValueOrDie();
}
// (Doc section: Projecting columns)

// (Doc section: Projecting columns #2)
// Read a dataset, but with column projection.
//
// This time, we read all original columns plus one derived column. This simply combines
// the previous two examples: selecting a subset of columns by name, and deriving new
// columns with an expression.
std::shared_ptr<arrow::Table> SelectAndProjectDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem,
    const std::shared_ptr<ds::FileFormat>& format, const std::string& base_dir) {
  fs::FileSelector selector;
  selector.base_dir = base_dir;
  auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format,
                                                    ds::FileSystemFactoryOptions())
                     .ValueOrDie();
  auto dataset = factory->Finish().ValueOrDie();
  // Read specified columns with a row filter
  auto scan_builder = dataset->NewScan().ValueOrDie();
  std::vector<std::string> names;
  std::vector<cp::Expression> exprs;
  // Read all the original columns.
  for (const auto& field : dataset->schema()->fields()) {
    names.push_back(field->name());
    exprs.push_back(cp::field_ref(field->name()));
  }
  // Also derive a new column.
  names.emplace_back("b_large");
  exprs.push_back(cp::greater(cp::field_ref("b"), cp::literal(1)));
  ABORT_ON_FAILURE(scan_builder->Project(exprs, names));
  auto scanner = scan_builder->Finish().ValueOrDie();
  return scanner->ToTable().ValueOrDie();
}
// (Doc section: Projecting columns #2)

// (Doc section: Reading and writing partitioned data #2)
// Read an entire dataset, but with partitioning information.
std::shared_ptr<arrow::Table> ScanPartitionedDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem,
    const std::shared_ptr<ds::FileFormat>& format, const std::string& base_dir) {
  fs::FileSelector selector;
  selector.base_dir = base_dir;
  selector.recursive = true;  // Make sure to search subdirectories
  ds::FileSystemFactoryOptions options;
  // We'll use Hive-style partitioning. We'll let Arrow Datasets infer the partition
  // schema.
  options.partitioning = ds::HivePartitioning::MakeFactory();
  auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format, options)
                     .ValueOrDie();
  auto dataset = factory->Finish().ValueOrDie();
  // Print out the fragments
  for (const auto& fragment : dataset->GetFragments().ValueOrDie()) {
    std::cout << "Found fragment: " << (*fragment)->ToString() << std::endl;
    std::cout << "Partition expression: "
              << (*fragment)->partition_expression().ToString() << std::endl;
  }
  auto scan_builder = dataset->NewScan().ValueOrDie();
  auto scanner = scan_builder->Finish().ValueOrDie();
  return scanner->ToTable().ValueOrDie();
}
// (Doc section: Reading and writing partitioned data #2)

// (Doc section: Reading and writing partitioned data #3)
// Read an entire dataset, but with partitioning information. Also, filter the dataset on
// the partition values.
std::shared_ptr<arrow::Table> FilterPartitionedDataset(
    const std::shared_ptr<fs::FileSystem>& filesystem,
    const std::shared_ptr<ds::FileFormat>& format, const std::string& base_dir) {
  fs::FileSelector selector;
  selector.base_dir = base_dir;
  selector.recursive = true;
  ds::FileSystemFactoryOptions options;
  options.partitioning = ds::HivePartitioning::MakeFactory();
  auto factory = ds::FileSystemDatasetFactory::Make(filesystem, selector, format, options)
                     .ValueOrDie();
  auto dataset = factory->Finish().ValueOrDie();
  auto scan_builder = dataset->NewScan().ValueOrDie();
  // Filter based on the partition values. This will mean that we won't even read the
  // files whose partition expressions don't match the filter.
  ABORT_ON_FAILURE(
      scan_builder->Filter(cp::equal(cp::field_ref("part"), cp::literal("b"))));
  auto scanner = scan_builder->Finish().ValueOrDie();
  return scanner->ToTable().ValueOrDie();
}
// (Doc section: Reading and writing partitioned data #3)

int main(int argc, char** argv) {
  if (argc < 3) {
    // Fake success for CI purposes.
    return EXIT_SUCCESS;
  }

  std::string uri = argv[1];
  std::string format_name = argv[2];
  std::string mode = argc > 3 ? argv[3] : "no_filter";
  std::string root_path;
  auto fs = fs::FileSystemFromUri(uri, &root_path).ValueOrDie();

  std::string base_path;
  std::shared_ptr<ds::FileFormat> format;
  if (format_name == "feather") {
    format = std::make_shared<ds::IpcFileFormat>();
    base_path = CreateExampleFeatherDataset(fs, root_path);
  } else if (format_name == "parquet") {
    format = std::make_shared<ds::ParquetFileFormat>();
    base_path = CreateExampleParquetDataset(fs, root_path);
  } else if (format_name == "parquet_hive") {
    format = std::make_shared<ds::ParquetFileFormat>();
    base_path = CreateExampleParquetHivePartitionedDataset(fs, root_path);
  } else {
    std::cerr << "Unknown format: " << format_name << std::endl;
    std::cerr << "Supported formats: feather, parquet, parquet_hive" << std::endl;
    return EXIT_FAILURE;
  }

  std::shared_ptr<arrow::Table> table;
  if (mode == "no_filter") {
    table = ScanWholeDataset(fs, format, base_path);
  } else if (mode == "filter") {
    table = FilterAndSelectDataset(fs, format, base_path);
  } else if (mode == "project") {
    table = ProjectDataset(fs, format, base_path);
  } else if (mode == "select_project") {
    table = SelectAndProjectDataset(fs, format, base_path);
  } else if (mode == "partitioned") {
    table = ScanPartitionedDataset(fs, format, base_path);
  } else if (mode == "filter_partitioned") {
    table = FilterPartitionedDataset(fs, format, base_path);
  } else {
    std::cerr << "Unknown mode: " << mode << std::endl;
    std::cerr << "Supported modes: no_filter, filter, project, select_project, "
                 "partitioned, filter_partitioned"
              << std::endl;
    return EXIT_FAILURE;
  }
  std::cout << "Read " << table->num_rows() << " rows" << std::endl;
  std::cout << table->ToString() << std::endl;
  return EXIT_SUCCESS;
}