Arrow Datasets#

Arrow C++ provides the concept and implementation of Datasets to work with fragmented data, which can be larger than memory, whether because it was generated in large volumes, is arriving from a stream, or sits in a large file on disk. In this article, you will:

  1. read a multi-file partitioned dataset and put it into a Table,

  2. write out a partitioned dataset from a Table.

Pre-requisites#

Before continuing, make sure you have:

  1. An Arrow installation, which you can set up here: Using Arrow C++ in your own project

  2. An understanding of basic Arrow data structures from Basic Arrow Data Structures

To see how Datasets differ from reading individual files, it may be useful to have also read Arrow File I/O; however, it is not required.

Setup#

Before running some computations, we need to fill in a couple of gaps:

  1. We need to include necessary headers.

  2. A main() is needed to glue things together.

  3. We need data on disk to play with.

Includes#

Before writing C++ code, we need some includes. We’ll get iostream for output, then import Arrow’s dataset functionality, along with the Parquet headers used to set up the example files:

#include <arrow/api.h>
#include <arrow/dataset/api.h>
// We use Parquet headers for setting up examples; they are not required for using
// datasets.
#include <parquet/arrow/reader.h>
#include <parquet/arrow/writer.h>

#include <unistd.h>
#include <iostream>

Main()#

For our glue, we’ll use the main() pattern from the previous tutorial on data structures:

int main() {
  arrow::Status st = RunMain();
  if (!st.ok()) {
    std::cerr << st << std::endl;
    return 1;
  }
  return 0;
}

This, as before, is paired with a RunMain():

arrow::Status RunMain() {

Generating Files for Reading#

We need some files to actually play with. In practice, you’ll likely have some input for your own application. Here, however, we want to explore without the overhead of supplying or finding a dataset, so let’s generate some to make this easy to follow. Feel free to read through this, but the concepts will be visited properly in this article – just copy it in, for now, and realize it ends with a partitioned dataset on disk:

// Generate some data for the rest of this example.
arrow::Result<std::shared_ptr<arrow::Table>> CreateTable() {
  // This code should look familiar from the basic Arrow example, and is not the
  // focus of this example. However, we need data to work with, and this makes that!
  auto schema =
      arrow::schema({arrow::field("a", arrow::int64()), arrow::field("b", arrow::int64()),
                     arrow::field("c", arrow::int64())});
  std::shared_ptr<arrow::Array> array_a;
  std::shared_ptr<arrow::Array> array_b;
  std::shared_ptr<arrow::Array> array_c;
  arrow::NumericBuilder<arrow::Int64Type> builder;
  ARROW_RETURN_NOT_OK(builder.AppendValues({0, 1, 2, 3, 4, 5, 6, 7, 8, 9}));
  ARROW_RETURN_NOT_OK(builder.Finish(&array_a));
  builder.Reset();
  ARROW_RETURN_NOT_OK(builder.AppendValues({9, 8, 7, 6, 5, 4, 3, 2, 1, 0}));
  ARROW_RETURN_NOT_OK(builder.Finish(&array_b));
  builder.Reset();
  ARROW_RETURN_NOT_OK(builder.AppendValues({1, 2, 1, 2, 1, 2, 1, 2, 1, 2}));
  ARROW_RETURN_NOT_OK(builder.Finish(&array_c));
  return arrow::Table::Make(schema, {array_a, array_b, array_c});
}

// Set up a dataset by writing two Parquet files.
arrow::Result<std::string> CreateExampleParquetDataset(
    const std::shared_ptr<arrow::fs::FileSystem>& filesystem,
    const std::string& root_path) {
  // Much like CreateTable(), this is a utility that gets us the dataset we'll be reading
  // from. Don't worry, we also write a dataset in the example proper.
  auto base_path = root_path + "parquet_dataset";
  ARROW_RETURN_NOT_OK(filesystem->CreateDir(base_path));
  // Create an Arrow Table
  ARROW_ASSIGN_OR_RAISE(auto table, CreateTable());
  // Write it into two Parquet files
  ARROW_ASSIGN_OR_RAISE(auto output,
                        filesystem->OpenOutputStream(base_path + "/data1.parquet"));
  ARROW_RETURN_NOT_OK(parquet::arrow::WriteTable(
      *table->Slice(0, 5), arrow::default_memory_pool(), output, 2048));
  ARROW_ASSIGN_OR_RAISE(output,
                        filesystem->OpenOutputStream(base_path + "/data2.parquet"));
  ARROW_RETURN_NOT_OK(parquet::arrow::WriteTable(
      *table->Slice(5), arrow::default_memory_pool(), output, 2048));
  return base_path;
}

arrow::Status PrepareEnv() {
  // Get our environment prepared for reading, by setting up some quick writing.
  ARROW_ASSIGN_OR_RAISE(auto src_table, CreateTable())
  std::shared_ptr<arrow::fs::FileSystem> setup_fs;
  // Note this operates in the directory the executable is built in.
  char setup_path[256];
  char* result = getcwd(setup_path, 256);
  if (result == NULL) {
    return arrow::Status::IOError("Fetching PWD failed.");
  }

  ARROW_ASSIGN_OR_RAISE(setup_fs, arrow::fs::FileSystemFromUriOrPath(setup_path));
  ARROW_ASSIGN_OR_RAISE(auto dset_path, CreateExampleParquetDataset(setup_fs, ""));

  return arrow::Status::OK();
}

In order to actually have these files, make sure the first thing called in RunMain() is our helper function PrepareEnv(), which will get a dataset on disk for us to play with:

  ARROW_RETURN_NOT_OK(PrepareEnv());

Reading a Partitioned Dataset#

Reading a Dataset is a distinct task from reading a single file: it takes more work, since multiple files and/or folders must be discovered and parsed. This process can be broken up into the following steps:

  1. Get a fs::FileSystem object for the local FS

  2. Create a fs::FileSelector and use it to prepare a dataset::FileSystemDatasetFactory

  3. Build a dataset::Dataset using the dataset::FileSystemDatasetFactory

  4. Use a dataset::Scanner to read into a Table

Preparing a FileSystem Object#

To begin, we need to be able to interact with the local filesystem, which requires an fs::FileSystem object. An fs::FileSystem is an abstraction that presents the same interface whether the data lives in Amazon S3, Google Cloud Storage, or on local disk. Here, we’ll be using local disk, so let’s declare it:

  // First, we need a filesystem object, which lets us interact with our local
  // filesystem starting at a given path. For the sake of simplicity, that'll be
  // the current directory.
  std::shared_ptr<arrow::fs::FileSystem> fs;

For this example, we’ll have our FileSystem’s base path exist in the same directory as the executable. fs::FileSystemFromUriOrPath() lets us get a fs::FileSystem object for any of the types of supported filesystems. Here, though, we’ll just pass our path:

  // Get the CWD, use it to make the FileSystem object.
  char init_path[256];
  char* result = getcwd(init_path, 256);
  if (result == NULL) {
    return arrow::Status::IOError("Fetching PWD failed.");
  }
  ARROW_ASSIGN_OR_RAISE(fs, arrow::fs::FileSystemFromUriOrPath(init_path));

See also

fs::FileSystem for the other supported filesystems.

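The same family of helpers also accepts full URIs for remote filesystems. As a hedged sketch that is not part of this example (the bucket and prefix are hypothetical, and your Arrow build must include S3 support), getting an S3-backed filesystem might look like:

  // Illustrative only: resolve an S3 URI to a FileSystem plus the path inside it.
  std::string path_inside_bucket;
  ARROW_ASSIGN_OR_RAISE(auto remote_fs,
                        arrow::fs::FileSystemFromUri("s3://my-bucket/some/prefix",
                                                     &path_inside_bucket));
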
Creating a FileSystemDatasetFactory#

An fs::FileSystem gives us access to many files and their metadata, but we still need a way to traverse the filesystem and select which files to read. In Arrow, a fs::FileSelector does exactly that:

  // A file selector lets us actually traverse a multi-file dataset.
  arrow::fs::FileSelector selector;

This fs::FileSelector isn’t able to do anything yet. In order to use it, we need to configure it. We’ll have it start any selection in “parquet_dataset”, which is where the environment-preparation step left us a dataset, and set recursive to true, which allows traversal of folders:

  selector.base_dir = "parquet_dataset";
  // Recursive is a safe bet if you don't know the nesting of your dataset.
  selector.recursive = true;

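If you want to check what the selector will match before handing it to a factory, the fs::FileSystem can list the matching entries directly. This snippet is illustrative only and does not appear in the complete listing at the end:

  // Peek at what the selector matches; each fs::FileInfo carries a path, type, and size.
  ARROW_ASSIGN_OR_RAISE(auto matched_infos, fs->GetFileInfo(selector));
  for (const auto& info : matched_infos) {
    std::cout << "Selected: " << info.path() << std::endl;
  }
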
To get a dataset::Dataset from a fs::FileSystem, we need to prepare a dataset::FileSystemDatasetFactory. This is a long but descriptive name – it’ll make us a factory to get data from our fs::FileSystem. First, we configure it by filling a dataset::FileSystemFactoryOptions struct:

  // Making an options object lets us configure our dataset reading.
  arrow::dataset::FileSystemFactoryOptions options;
  // We'll use Hive-style partitioning. We'll let Arrow Datasets infer the partition
  // schema. We won't set any other options, defaults are fine.
  options.partitioning = arrow::dataset::HivePartitioning::MakeFactory();

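Hive-style is not the only partitioning flavor. If a dataset were laid out with plain directory names rather than “key=value” pairs, we could instead describe each directory level explicitly with a dataset::DirectoryPartitioning factory. This is a hedged sketch that we don’t use in this example, and the field names are hypothetical:

  // For paths like "2023/5/...", name the field that each directory level maps to.
  options.partitioning =
      arrow::dataset::DirectoryPartitioning::MakeFactory({"year", "month"});
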
There are many file formats, and we have to declare which one to expect when actually reading. Parquet is what we have on disk, so of course we’ll ask for that:

  auto read_format = std::make_shared<arrow::dataset::ParquetFileFormat>();

After setting up the fs::FileSystem, fs::FileSelector, options, and file format, we can make that dataset::FileSystemDatasetFactory. This simply requires passing in everything we’ve prepared and assigning that to a variable:

  // Now, we get a factory that will let us get our dataset -- we don't have the
  // dataset yet!
  ARROW_ASSIGN_OR_RAISE(auto factory, arrow::dataset::FileSystemDatasetFactory::Make(
                                          fs, selector, read_format, options));

Build Dataset using Factory#

With a dataset::FileSystemDatasetFactory set up, we can actually build our dataset::Dataset with dataset::FileSystemDatasetFactory::Finish(), just like with an ArrayBuilder back in the basic tutorial:

  // Now we build our dataset from the factory.
  ARROW_ASSIGN_OR_RAISE(auto read_dataset, factory->Finish());

Now, we have a dataset::Dataset object in memory. This does not mean that the entire dataset is manifested in memory, but that we now have access to tools that allow us to explore and use the dataset that is on disk. For example, we can grab the fragments (files) that make up our whole dataset, and print those out, along with some small info:

  // Print out the fragments
  ARROW_ASSIGN_OR_RAISE(auto fragments, read_dataset->GetFragments());
  for (const auto& fragment : fragments) {
    std::cout << "Found fragment: " << (*fragment)->ToString() << std::endl;
    std::cout << "Partition expression: "
              << (*fragment)->partition_expression().ToString() << std::endl;
  }

Move Dataset into Table#

One way to work with a Dataset is to read it into a Table, at which point anything we’ve learned about Tables applies.

See also

Acero: A C++ streaming execution engine for execution that avoids manifesting the entire dataset in memory.

In order to move a Dataset’s contents into a Table, we need a dataset::Scanner, which scans the data and outputs it to the Table. First, we get a dataset::ScannerBuilder from the dataset::Dataset:

  // Scan dataset into a Table -- once this is done, you can do
  // normal table things with it, like computation and printing. However, now you're
  // also dedicated to being in memory.
  ARROW_ASSIGN_OR_RAISE(auto read_scan_builder, read_dataset->NewScan());

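Before finishing it, the dataset::ScannerBuilder can optionally be configured, for example to read only a subset of columns. We don’t do this here, since the rest of the article uses all three columns, but a hedged sketch would look like:

  // Only materialize columns "a" and "b"; anything else is skipped during the scan.
  ARROW_RETURN_NOT_OK(read_scan_builder->Project({"a", "b"}));
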
Of course, a Builder’s only use is to get us our dataset::Scanner, so let’s use dataset::ScannerBuilder::Finish():

  ARROW_ASSIGN_OR_RAISE(auto read_scanner, read_scan_builder->Finish());

Now that we have a tool to move through our dataset::Dataset, let’s use it to get our Table. dataset::Scanner::ToTable() offers exactly what we’re looking for, and we can print the results:

  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::Table> table, read_scanner->ToTable());
  std::cout << table->ToString();

This leaves us with a normal Table. Again, to do things with Datasets without moving to a Table, consider using Acero.
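
As a quick sanity check that this is an ordinary in-memory Table, we could, for instance, print its shape (this line is not part of the complete listing at the end):

  // Nothing dataset-specific left here -- this is just a Table.
  std::cout << "Read " << table->num_rows() << " rows and " << table->num_columns()
            << " columns." << std::endl;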

Writing a Dataset to Disk from Table#

Writing a dataset::Dataset is a distinct task from writing a single file: it takes more work, since a partitioning scheme must be handled across multiple files and folders. This process can be broken up into the following steps:

  1. Prepare a TableBatchReader

  2. Create a dataset::Scanner to pull data from TableBatchReader

  3. Prepare schema, partitioning, and file format options

  4. Set up dataset::FileSystemDatasetWriteOptions – a struct that configures our writing functions

  5. Write dataset to disk

Prepare Data from Table for Writing#

We have a Table, and we want to get a dataset::Dataset on disk. In fact, for the sake of exploration, we’ll use a different partitioning scheme for the dataset – instead of just breaking into halves like the original fragments, we’ll partition based on each row’s value in the “a” column.

To get started on that, let’s get a TableBatchReader! This makes it very easy to write to a Dataset, and can be used elsewhere whenever a Table needs to be broken into a stream of RecordBatches. Here, we can just use the TableBatchReader’s constructor, with our table:

  // Now, let's get a table out to disk as a dataset!
  // We make a RecordBatchReader from our Table, then set up a scanner, which lets us
  // go to a file.
  std::shared_ptr<arrow::TableBatchReader> write_dataset =
      std::make_shared<arrow::TableBatchReader>(table);

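To make the stream idea concrete, here is an illustrative sketch of pulling RecordBatches out of a TableBatchReader by hand. It is not part of this example, since draining the reader here would leave nothing for the scanner below:

  // Read batches one at a time; the reader signals the end with a null batch.
  arrow::TableBatchReader manual_reader(table);
  std::shared_ptr<arrow::RecordBatch> batch;
  while (true) {
    ARROW_RETURN_NOT_OK(manual_reader.ReadNext(&batch));
    if (batch == nullptr) break;
    std::cout << "Batch with " << batch->num_rows() << " rows" << std::endl;
  }
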
Create Scanner for Moving Table Data#

The process for writing a dataset::Dataset, once a source of data is available, is similar to the reverse of reading it. Before, we used a dataset::Scanner in order to scan into a Table – now, we need one to read out of our TableBatchReader. To get that dataset::Scanner, we’ll make a dataset::ScannerBuilder based on our TableBatchReader, then use that Builder to build a dataset::Scanner:

  auto write_scanner_builder =
      arrow::dataset::ScannerBuilder::FromRecordBatchReader(write_dataset);
  ARROW_ASSIGN_OR_RAISE(auto write_scanner, write_scanner_builder->Finish())

Prepare Schema, Partitioning, and File Format Variables#

Since we want to partition based on the “a” column, we need to declare that. When defining our partitioning Schema, we’ll just have a single Field that contains “a”:

  // The partition schema determines which fields are used as keys for partitioning.
  auto partition_schema = arrow::schema({arrow::field("a", arrow::utf8())});

This Schema determines what the key is for partitioning, but we need to choose the algorithm that’ll do something with this key. We will use Hive-style again, this time with our schema passed to it as configuration:

  // We'll use Hive-style partitioning, which creates directories with "key=value"
  // pairs.
  auto partitioning =
      std::make_shared<arrow::dataset::HivePartitioning>(partition_schema);

Several file formats are available, but Parquet is commonly used with Arrow, so we’ll write back out to that:

  // Now, we declare we'll be writing Parquet files.
  auto write_format = std::make_shared<arrow::dataset::ParquetFileFormat>();

Configure FileSystemDatasetWriteOptions#

In order to write to disk, we need some configuration, which we’ll supply by setting values in a dataset::FileSystemDatasetWriteOptions struct. We’ll initialize it with defaults where possible:

  // This time, we make Options for writing, but do much more configuration.
  arrow::dataset::FileSystemDatasetWriteOptions write_options;
  // Defaults to start.
  write_options.file_write_options = write_format->DefaultWriteOptions();

One important step in writing to disk is having an fs::FileSystem to target. Luckily, we have one from when we set it up for reading. This is a simple variable assignment:

  // Use the filesystem we already have.
  write_options.filesystem = fs;

Arrow can make the directory, but it does need a name for that directory, so let’s call it “write_dataset”:

  // Write to the folder "write_dataset" in current directory.
  write_options.base_dir = "write_dataset";

We made a partitioning method previously, declaring that we’d use Hive-style – this is where we actually pass that to our writing function:

  // Use the partitioning declared above.
  write_options.partitioning = partitioning;

Part of what’ll happen is that Arrow will break the data up across files, preventing any single file from being too large to handle. This is what makes a dataset fragmented in the first place. In order to set this up, we need a base name for each fragment within a directory. Here we’ll use “part{i}.parquet”, where {i} is replaced by an automatically incremented counter, so each file within the same directory gets a unique name such as “part0.parquet”:

  // Define what the name for the files making up the dataset will be.
  write_options.basename_template = "part{i}.parquet";

Sometimes, data will be written to the same location more than once, and overwriting will be accepted. Since we may want to run this application more than once, we will set Arrow to overwrite existing data – if we didn’t, Arrow would abort due to seeing existing data after the first run of this application:

  // Set behavior to overwrite existing data -- specifically, this lets this example
  // be run more than once, and allows whatever code you have to overwrite what's there.
  write_options.existing_data_behavior =
      arrow::dataset::ExistingDataBehavior::kOverwriteOrIgnore;

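dataset::FileSystemDatasetWriteOptions has further knobs that we leave at their defaults. For instance, the number of partition directories a single write may create can be capped; this is a hedged sketch, and exactly which fields are available depends on your Arrow version:

  // Not needed for this example -- 1024 is the default. The write errors out if
  // partitioning would produce more directories than this.
  write_options.max_partitions = 1024;
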
Write Dataset to Disk#

Once the dataset::FileSystemDatasetWriteOptions has been configured, and a dataset::Scanner is prepared to parse the data, we can pass the Options and dataset::Scanner to dataset::FileSystemDataset::Write() to write out to disk:

  // Write to disk!
  ARROW_RETURN_NOT_OK(
      arrow::dataset::FileSystemDataset::Write(write_options, write_scanner));

You can review your disk to see that you’ve written a folder containing subfolders for every value of “a”, each of which contains Parquet files!

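For this example’s data, the resulting layout should look roughly like the sketch below, with one “a=value” folder per distinct value and the {i} counter restarting in each folder:

  write_dataset/
    a=0/part0.parquet
    a=1/part0.parquet
    ...
    a=9/part0.parquet
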
Ending Program#

At the end, we just return Status::OK() so that main() knows we’re done and that everything’s okay, just like in the preceding tutorials.

  return arrow::Status::OK();
}

With that, you’ve read and written partitioned datasets! This method, with some configuration, will work for any supported dataset format. The NYC Taxi dataset is a well-known example of such a dataset, which you can find here. Now you can get larger-than-memory data mapped for use!

This means we now need to be able to process this data without pulling it all into memory at once. For that, try Acero.

See also

Acero: A C++ streaming execution engine for more information on Acero.

Refer to the listing below for a copy of the complete code:

// (Doc section: Includes)
#include <arrow/api.h>
#include <arrow/dataset/api.h>
// We use Parquet headers for setting up examples; they are not required for using
// datasets.
#include <parquet/arrow/reader.h>
#include <parquet/arrow/writer.h>

#include <unistd.h>
#include <iostream>
// (Doc section: Includes)

// (Doc section: Helper Functions)
// Generate some data for the rest of this example.
arrow::Result<std::shared_ptr<arrow::Table>> CreateTable() {
  // This code should look familiar from the basic Arrow example, and is not the
  // focus of this example. However, we need data to work with, and this makes that!
  auto schema =
      arrow::schema({arrow::field("a", arrow::int64()), arrow::field("b", arrow::int64()),
                     arrow::field("c", arrow::int64())});
  std::shared_ptr<arrow::Array> array_a;
  std::shared_ptr<arrow::Array> array_b;
  std::shared_ptr<arrow::Array> array_c;
  arrow::NumericBuilder<arrow::Int64Type> builder;
  ARROW_RETURN_NOT_OK(builder.AppendValues({0, 1, 2, 3, 4, 5, 6, 7, 8, 9}));
  ARROW_RETURN_NOT_OK(builder.Finish(&array_a));
  builder.Reset();
  ARROW_RETURN_NOT_OK(builder.AppendValues({9, 8, 7, 6, 5, 4, 3, 2, 1, 0}));
  ARROW_RETURN_NOT_OK(builder.Finish(&array_b));
  builder.Reset();
  ARROW_RETURN_NOT_OK(builder.AppendValues({1, 2, 1, 2, 1, 2, 1, 2, 1, 2}));
  ARROW_RETURN_NOT_OK(builder.Finish(&array_c));
  return arrow::Table::Make(schema, {array_a, array_b, array_c});
}

// Set up a dataset by writing two Parquet files.
arrow::Result<std::string> CreateExampleParquetDataset(
    const std::shared_ptr<arrow::fs::FileSystem>& filesystem,
    const std::string& root_path) {
  // Much like CreateTable(), this is a utility that gets us the dataset we'll be reading
  // from. Don't worry, we also write a dataset in the example proper.
  auto base_path = root_path + "parquet_dataset";
  ARROW_RETURN_NOT_OK(filesystem->CreateDir(base_path));
  // Create an Arrow Table
  ARROW_ASSIGN_OR_RAISE(auto table, CreateTable());
  // Write it into two Parquet files
  ARROW_ASSIGN_OR_RAISE(auto output,
                        filesystem->OpenOutputStream(base_path + "/data1.parquet"));
  ARROW_RETURN_NOT_OK(parquet::arrow::WriteTable(
      *table->Slice(0, 5), arrow::default_memory_pool(), output, 2048));
  ARROW_ASSIGN_OR_RAISE(output,
                        filesystem->OpenOutputStream(base_path + "/data2.parquet"));
  ARROW_RETURN_NOT_OK(parquet::arrow::WriteTable(
      *table->Slice(5), arrow::default_memory_pool(), output, 2048));
  return base_path;
}

arrow::Status PrepareEnv() {
  // Get our environment prepared for reading, by setting up some quick writing.
  ARROW_ASSIGN_OR_RAISE(auto src_table, CreateTable())
  std::shared_ptr<arrow::fs::FileSystem> setup_fs;
  // Note this operates in the directory the executable is built in.
  char setup_path[256];
  char* result = getcwd(setup_path, 256);
  if (result == NULL) {
    return arrow::Status::IOError("Fetching PWD failed.");
  }

  ARROW_ASSIGN_OR_RAISE(setup_fs, arrow::fs::FileSystemFromUriOrPath(setup_path));
  ARROW_ASSIGN_OR_RAISE(auto dset_path, CreateExampleParquetDataset(setup_fs, ""));

  return arrow::Status::OK();
}
// (Doc section: Helper Functions)

// (Doc section: RunMain)
arrow::Status RunMain() {
  // (Doc section: RunMain)
  // (Doc section: PrepareEnv)
  ARROW_RETURN_NOT_OK(PrepareEnv());
  // (Doc section: PrepareEnv)

  // (Doc section: FileSystem Declare)
  // First, we need a filesystem object, which lets us interact with our local
  // filesystem starting at a given path. For the sake of simplicity, that'll be
  // the current directory.
  std::shared_ptr<arrow::fs::FileSystem> fs;
  // (Doc section: FileSystem Declare)

  // (Doc section: FileSystem Init)
  // Get the CWD, use it to make the FileSystem object.
  char init_path[256];
  char* result = getcwd(init_path, 256);
  if (result == NULL) {
    return arrow::Status::IOError("Fetching PWD failed.");
  }
  ARROW_ASSIGN_OR_RAISE(fs, arrow::fs::FileSystemFromUriOrPath(init_path));
  // (Doc section: FileSystem Init)

  // (Doc section: FileSelector Declare)
  // A file selector lets us actually traverse a multi-file dataset.
  arrow::fs::FileSelector selector;
  // (Doc section: FileSelector Declare)
  // (Doc section: FileSelector Config)
  selector.base_dir = "parquet_dataset";
  // Recursive is a safe bet if you don't know the nesting of your dataset.
  selector.recursive = true;
  // (Doc section: FileSelector Config)
  // (Doc section: FileSystemFactoryOptions)
  // Making an options object lets us configure our dataset reading.
  arrow::dataset::FileSystemFactoryOptions options;
  // We'll use Hive-style partitioning. We'll let Arrow Datasets infer the partition
  // schema. We won't set any other options, defaults are fine.
  options.partitioning = arrow::dataset::HivePartitioning::MakeFactory();
  // (Doc section: FileSystemFactoryOptions)
  // (Doc section: File Format Setup)
  auto read_format = std::make_shared<arrow::dataset::ParquetFileFormat>();
  // (Doc section: File Format Setup)
  // (Doc section: FileSystemDatasetFactory Make)
  // Now, we get a factory that will let us get our dataset -- we don't have the
  // dataset yet!
  ARROW_ASSIGN_OR_RAISE(auto factory, arrow::dataset::FileSystemDatasetFactory::Make(
                                          fs, selector, read_format, options));
  // (Doc section: FileSystemDatasetFactory Make)
  // (Doc section: FileSystemDatasetFactory Finish)
  // Now we build our dataset from the factory.
  ARROW_ASSIGN_OR_RAISE(auto read_dataset, factory->Finish());
  // (Doc section: FileSystemDatasetFactory Finish)
  // (Doc section: Dataset Fragments)
  // Print out the fragments
  ARROW_ASSIGN_OR_RAISE(auto fragments, read_dataset->GetFragments());
  for (const auto& fragment : fragments) {
    std::cout << "Found fragment: " << (*fragment)->ToString() << std::endl;
    std::cout << "Partition expression: "
              << (*fragment)->partition_expression().ToString() << std::endl;
  }
  // (Doc section: Dataset Fragments)
  // (Doc section: Read Scan Builder)
  // Scan dataset into a Table -- once this is done, you can do
  // normal table things with it, like computation and printing. However, now you're
  // also dedicated to being in memory.
  ARROW_ASSIGN_OR_RAISE(auto read_scan_builder, read_dataset->NewScan());
  // (Doc section: Read Scan Builder)
  // (Doc section: Read Scanner)
  ARROW_ASSIGN_OR_RAISE(auto read_scanner, read_scan_builder->Finish());
  // (Doc section: Read Scanner)
  // (Doc section: To Table)
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::Table> table, read_scanner->ToTable());
  std::cout << table->ToString();
  // (Doc section: To Table)

  // (Doc section: TableBatchReader)
  // Now, let's get a table out to disk as a dataset!
  // We make a RecordBatchReader from our Table, then set up a scanner, which lets us
  // go to a file.
  std::shared_ptr<arrow::TableBatchReader> write_dataset =
      std::make_shared<arrow::TableBatchReader>(table);
  // (Doc section: TableBatchReader)
  // (Doc section: WriteScanner)
  auto write_scanner_builder =
      arrow::dataset::ScannerBuilder::FromRecordBatchReader(write_dataset);
  ARROW_ASSIGN_OR_RAISE(auto write_scanner, write_scanner_builder->Finish())
  // (Doc section: WriteScanner)
  // (Doc section: Partition Schema)
  // The partition schema determines which fields are used as keys for partitioning.
  auto partition_schema = arrow::schema({arrow::field("a", arrow::utf8())});
  // (Doc section: Partition Schema)
  // (Doc section: Partition Create)
  // We'll use Hive-style partitioning, which creates directories with "key=value"
  // pairs.
  auto partitioning =
      std::make_shared<arrow::dataset::HivePartitioning>(partition_schema);
  // (Doc section: Partition Create)
  // (Doc section: Write Format)
  // Now, we declare we'll be writing Parquet files.
  auto write_format = std::make_shared<arrow::dataset::ParquetFileFormat>();
  // (Doc section: Write Format)
  // (Doc section: Write Options)
  // This time, we make Options for writing, but do much more configuration.
  arrow::dataset::FileSystemDatasetWriteOptions write_options;
  // Defaults to start.
  write_options.file_write_options = write_format->DefaultWriteOptions();
  // (Doc section: Write Options)
  // (Doc section: Options FS)
  // Use the filesystem we already have.
  write_options.filesystem = fs;
  // (Doc section: Options FS)
  // (Doc section: Options Target)
  // Write to the folder "write_dataset" in current directory.
  write_options.base_dir = "write_dataset";
  // (Doc section: Options Target)
  // (Doc section: Options Partitioning)
  // Use the partitioning declared above.
  write_options.partitioning = partitioning;
  // (Doc section: Options Partitioning)
  // (Doc section: Options Name Template)
  // Define what the name for the files making up the dataset will be.
  write_options.basename_template = "part{i}.parquet";
  // (Doc section: Options Name Template)
  // (Doc section: Options File Behavior)
  // Set behavior to overwrite existing data -- specifically, this lets this example
  // be run more than once, and allows whatever code you have to overwrite what's there.
  write_options.existing_data_behavior =
      arrow::dataset::ExistingDataBehavior::kOverwriteOrIgnore;
  // (Doc section: Options File Behavior)
  // (Doc section: Write Dataset)
  // Write to disk!
  ARROW_RETURN_NOT_OK(
      arrow::dataset::FileSystemDataset::Write(write_options, write_scanner));
  // (Doc section: Write Dataset)
  // (Doc section: Ret)
  return arrow::Status::OK();
}
// (Doc section: Ret)
// (Doc section: Main)
int main() {
  arrow::Status st = RunMain();
  if (!st.ok()) {
    std::cerr << st << std::endl;
    return 1;
  }
  return 0;
}
// (Doc section: Main)