Acero: A C++ streaming execution engine

Warning

Acero is experimental and a stable API is not yet guaranteed.

Motivation

For many complex computations, successive direct invocation of compute functions is not feasible in either memory or computation time. Doing so causes all intermediate data to be fully materialized. To facilitate arbitrarily large inputs and more efficient resource usage, the Arrow C++ implementation also provides Acero, a streaming query engine with which computations can be formulated and executed.

An example graph of a streaming execution workflow.

Acero allows computation to be expressed as an “execution plan” (ExecPlan) which is a directed graph of operators. Each operator (ExecNode) provides, transforms, or consumes the data passing through it. Batches of data (ExecBatch) flow along edges of the graph from node to node. Structuring the API around streams of batches allows the working set for each node to be tuned for optimal performance independent of any other nodes in the graph. Each ExecNode processes batches as they are pushed to it along an edge of the graph by upstream nodes (its inputs), and pushes batches along an edge of the graph to downstream nodes (its outputs) as they are finalized.

Substrait

In order to use Acero you will need to create an execution plan. This is the model that describes the computation you want to apply to your data. Acero has its own internal representation for execution plans but most users should not interact with this directly as it will couple their code to Acero.

Substrait is an open standard for execution plans. Acero implements the Substrait “consumer” interface. This means that Acero can accept a Substrait plan and fulfill the plan, loading the requested data and applying the desired computation. By using Substrait plans users can easily switch out to a different execution engine at a later time.
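For illustration, here is a minimal sketch of consuming a serialized plan. It assumes the experimental arrow::engine Substrait bindings (arrow::engine::DeserializePlans and its consumer-factory callback) and a buffer holding a serialized substrait.Plan message, so treat the exact names and signatures as assumptions rather than a stable API:

#include <arrow/engine/substrait/serde.h>

// Sketch only: deserialize a buffer containing a substrait.Plan protobuf into
// Acero declarations, one per top-level relation.  Each declaration is
// terminated with a consuming sink produced by the supplied factory.
arrow::Result<std::vector<arrow::compute::Declaration>> ConsumeSubstrait(
    const arrow::Buffer& substrait_buffer,
    std::function<std::shared_ptr<arrow::compute::SinkNodeConsumer>()>
        consumer_factory) {
  return arrow::engine::DeserializePlans(substrait_buffer, consumer_factory);
}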

Substrait Conformance

Substrait defines a broad set of operators and functions for many different situations and it is unlikely that Acero will ever completely satisfy all defined Substrait operators and functions. To help understand what is available, the following sections define which features are currently implemented in Acero and any caveats that apply.

Plans

  • A plan should have a single top-level relation.

  • The consumer is currently based on a custom build of Substrait that is older than 0.1.0. Any features added to Substrait after 0.1.0 are not supported.

Extensions

  • If a plan contains any extension type variations it will be rejected.

  • If a plan contains any advanced extensions it will be rejected.

Relations (in general)

  • The emit property (to customize output order of a node or to drop columns) is not supported and plans containing this property will be rejected.

  • The hint property is not supported and plans containing this property will be rejected.

  • Any advanced extensions will cause a plan to be rejected.

  • Any relation not explicitly listed below will not be supported and will cause the plan to be rejected.

Read Relations

  • The projection property is not supported and plans containing this property will be rejected.

  • The only supported read type is LocalFiles. Plans with any other type will be rejected.

  • Only the parquet file format is currently supported.

  • All URIs must use the file scheme.

  • partition_index, start, and length are not supported. Plans containing these properties will be rejected.

  • The Substrait spec requires that a filter be completely satisfied by a read relation. However, Acero only uses a read filter for pushdown filtering and the filter may not be fully satisfied. Users should generally attach an additional filter relation with the same filter expression after the read relation.

Filter Relations

  • No known caveats

Project Relations

  • No known caveats

Join Relations

  • The join type JOIN_TYPE_SINGLE is not supported and plans containing this will be rejected.

  • The join expression must be a call to either the equal or is_not_distinct_from functions. Both arguments to the call must be direct references. Only a single join key is supported.

  • The post_join_filter property is not supported and will be ignored.

Aggregate Relations

  • At most one grouping set is supported.

  • Each grouping expression must be a direct reference.

  • Each measure’s arguments must be direct references.

  • A measure may not have a filter.

  • A measure may not have sorts.

  • A measure’s invocation must be AGGREGATION_INVOCATION_ALL.

  • A measure’s phase must be AGGREGATION_PHASE_INITIAL_TO_RESULT.

Expressions (general)

  • Various places in the Substrait spec allow for expressions to be used outside of a filter or project relation. For example, a join expression or an aggregate grouping set. Acero typically expects these expressions to be direct references. Planners should extract the implicit projection into a formal project relation before delivering the plan to Acero.

  • Older versions of Isthmus would omit optional arguments instead of including them as unspecified enums. Acero will not support these plans.

Literals

  • A literal with non-default nullability will cause a plan to be rejected.

Types

  • Acero does not have full support for non-nullable types and may allow input to have nulls without rejecting it.

  • The table below shows the mapping between Arrow types and Substrait type classes that are currently supported.

Substrait / Arrow Type Mapping

Substrait Type        Arrow Type                Caveat
--------------------  ------------------------  ----------------------------------------
boolean               boolean
i8                    int8
i16                   int16
i32                   int32
i64                   int64
fp32                  float32
fp64                  float64
string                string
binary                binary
timestamp             timestamp<MICRO,"">
timestamp_tz          timestamp<MICRO,"UTC">
date                  date32<DAY>
time                  time64<MICRO>
interval_year         -                         Not currently supported
interval_day          -                         Not currently supported
uuid                  -                         Not currently supported
FIXEDCHAR<L>          -                         Not currently supported
VARCHAR<L>            -                         Not currently supported
FIXEDBINARY<L>        fixed_size_binary<L>
DECIMAL<P,S>          decimal128<P,S>
STRUCT<T1…TN>         struct<T1…TN>             Arrow struct fields will have no name (empty string)
NSTRUCT<N:T1…N:Tn>    -                         Not currently supported
LIST<T>               list<T>
MAP<K,V>              map<K,V>                  K must not be nullable

Functions

  • Acero does not support the legacy args style of declaring arguments.

  • The following functions have caveats or are not supported at all. Note that this is not a comprehensive list. Functions are being added to Substrait at a rapid pace and new functions may be missing.

    • Acero does not support the SATURATE option for overflow.

    • Acero does not support kernels that take more than two arguments for the functions and, or, xor.

    • Acero does not support temporal arithmetic.

    • Acero does not support the following standard functions:

      • is_not_distinct_from

      • like

      • substring

      • starts_with

      • ends_with

      • contains

      • count

      • count_distinct

      • approx_count_distinct

  • The functions above must be referenced using the URI https://github.com/apache/arrow/blob/master/format/substrait/extension_types.yaml

Architecture Overview

ExecNode

Each node in the graph is an implementation of the ExecNode interface.

ExecPlan

A set of ExecNode is contained and (to an extent) coordinated by an ExecPlan.

ExecFactoryRegistry

Instances of ExecNode are constructed by factory functions held in a ExecFactoryRegistry.

ExecNodeOptions

Heterogeneous parameters for factories of ExecNode are bundled in an ExecNodeOptions.

Declaration

A dplyr-inspired helper for efficient construction of an ExecPlan.

ExecBatch

A lightweight container for a single chunk of data in the Arrow format. In contrast to RecordBatch, ExecBatch is intended for use exclusively in a streaming execution context (for example, it doesn’t have a corresponding Python binding). Furthermore, columns which happen to have a constant value may be represented by a Scalar instead of an Array. In addition, ExecBatch may carry execution-relevant properties including a guaranteed-true-filter for Expression simplification.
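As a small illustration (a sketch, not taken from the Arrow sources), an ExecBatch can mix Array and Scalar columns:

#include <arrow/compute/exec.h>
#include <arrow/scalar.h>

// Build an ExecBatch whose second column is a constant value represented
// by a Scalar rather than a fully materialized Array.
arrow::compute::ExecBatch MakeBatchWithConstantColumn(
    std::shared_ptr<arrow::Array> values) {
  const int64_t length = values->length();
  std::vector<arrow::Datum> columns;
  columns.emplace_back(std::move(values));      // an ordinary Array column
  columns.emplace_back(arrow::MakeScalar(42));  // a constant column as a Scalar
  return arrow::compute::ExecBatch(std::move(columns), length);
}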

An example ExecNode implementation which simply passes all input batches through unchanged:

class PassthruNode : public ExecNode {
 public:
  // InputReceived is the main entry point for ExecNodes. It is invoked
  // by an input of this node to push a batch here for processing.
  void InputReceived(ExecNode* input, ExecBatch batch) override {
    // Since this is a passthru node we simply push the batch to our
    // only output here.
    outputs_[0]->InputReceived(this, batch);
  }

  // ErrorReceived is called by an input of this node to report an error.
  // ExecNodes should always forward errors to their outputs unless they
  // are able to fully handle the error (this is rare).
  void ErrorReceived(ExecNode* input, Status error) override {
    outputs_[0]->ErrorReceived(this, error);
  }

  // InputFinished is used to signal how many batches will ultimately arrive.
  // It may be called with any ordering relative to InputReceived/ErrorReceived.
  void InputFinished(ExecNode* input, int total_batches) override {
    outputs_[0]->InputFinished(this, total_batches);
  }

  // ExecNodes may request that their inputs throttle production of batches
  // until they are ready for more, or stop production if no further batches
  // are required.  These signals should typically be forwarded to the inputs
  // of the ExecNode.
  void ResumeProducing(ExecNode* output) override { inputs_[0]->ResumeProducing(this); }
  void PauseProducing(ExecNode* output) override { inputs_[0]->PauseProducing(this); }
  void StopProducing(ExecNode* output) override { inputs_[0]->StopProducing(this); }

  // An ExecNode has a single output schema to which all its batches conform.
  using ExecNode::output_schema;

  // ExecNodes carry basic introspection for debugging purposes
  const char* kind_name() const override { return "PassthruNode"; }
  using ExecNode::label;
  using ExecNode::SetLabel;
  using ExecNode::ToString;

  // An ExecNode holds references to its inputs and outputs, so it is possible
  // to walk the graph of execution if necessary.
  using ExecNode::inputs;
  using ExecNode::outputs;

  // StartProducing() and StopProducing() are invoked by an ExecPlan to
  // coordinate the graph-wide execution state.  These do not need to be
  // forwarded to inputs or outputs.
  Status StartProducing() override { return Status::OK(); }
  void StopProducing() override {}
  Future<> finished() override { return inputs_[0]->finished(); }
};

Note that each method which is associated with an edge of the graph must be invoked with an ExecNode* to identify the node which invoked it. For example, in an ExecNode which implements JOIN this tagging might be used to differentiate between batches from the left or right inputs. InputReceived, ErrorReceived, InputFinished may only be invoked by the inputs of a node, while ResumeProducing, PauseProducing, StopProducing may only be invoked by outputs of a node.
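For instance, a two-input node (such as a join) might dispatch on this tag; a sketch in the style of PassthruNode above, not a complete implementation:

class TwoInputNode : public ExecNode {
 public:
  void InputReceived(ExecNode* input, ExecBatch batch) override {
    if (input == inputs_[0]) {
      // the batch was pushed by the left input
    } else {
      // the batch was pushed by the right input
    }
  }
  // ... the remaining ExecNode methods would be implemented as above ...
};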

ExecPlan contains the associated instances of ExecNode and is used to start and stop execution of all nodes and for querying/awaiting their completion:

// construct an ExecPlan first to hold your nodes
ARROW_ASSIGN_OR_RAISE(auto plan, ExecPlan::Make(default_exec_context()));

// ... add nodes to your ExecPlan

// start all nodes in the graph
ARROW_RETURN_NOT_OK(plan->StartProducing());

SetUserCancellationCallback([plan] {
  // stop all nodes in the graph
  plan->StopProducing();
});

// Complete will be marked finished when all nodes have run to completion
// or acknowledged a StopProducing() signal. The ExecPlan should be kept
// alive until this future is marked finished.
Future<> complete = plan->finished();

Constructing ExecPlan objects

Warning

The following will be superseded by construction from Compute IR, see ARROW-14074.

None of the concrete implementations of ExecNode are exposed in headers, so they can’t be constructed directly outside the translation unit where they are defined. Instead, factories to create them are provided in an extensible registry. This structure provides a number of benefits:

  • This enforces consistent construction.

  • It decouples implementations from consumers of the interface (for example, we have two classes for scalar and grouped aggregates and can choose which to construct within a single factory by checking whether grouping keys are provided)

  • This expedites integration with out-of-library extensions. For example “scan” nodes are implemented in the separate libarrow_dataset.so library.

  • Since the class is not referenceable outside the translation unit in which it is defined, compilers can optimize more aggressively.

Factories of ExecNode can be retrieved by name from the registry. The default registry is available through arrow::compute::default_exec_factory_registry() and can be queried for the built-in factories:

// get the factory for "filter" nodes:
ARROW_ASSIGN_OR_RAISE(auto make_filter,
                      default_exec_factory_registry()->GetFactory("filter"));

// factories take three arguments:
ARROW_ASSIGN_OR_RAISE(ExecNode* filter_node, *make_filter(
    // the ExecPlan which should own this node
    plan.get(),

    // nodes which will send batches to this node (inputs)
    {scan_node},

    // parameters unique to "filter" nodes
    FilterNodeOptions{filter_expression}));

// alternative shorthand:
ARROW_ASSIGN_OR_RAISE(filter_node, MakeExecNode("filter",
    plan.get(), {scan_node}, FilterNodeOptions{filter_expression}));

Factories can also be added to the default registry as long as they are convertible to std::function<Result<ExecNode*>( ExecPlan*, std::vector<ExecNode*>, const ExecNodeOptions&)>.
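For example, a factory for a custom node could be registered roughly like this (a sketch; the "passthru" name and the construction details are illustrative):

// Register a factory for the PassthruNode sketched earlier.
Status RegisterPassthruNode(ExecFactoryRegistry* registry) {
  return registry->AddFactory(
      "passthru",
      [](ExecPlan* plan, std::vector<ExecNode*> inputs,
         const ExecNodeOptions& options) -> Result<ExecNode*> {
        // validate inputs/options here, then construct the node so that its
        // lifetime is tied to the plan (e.g. plan->EmplaceNode<PassthruNode>(...))
        return Status::NotImplemented("illustration only");
      });
}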

To build an ExecPlan representing a simple pipeline which reads from a RecordBatchReader then filters, projects, and writes to disk:

std::shared_ptr<RecordBatchReader> reader = GetStreamOfBatches();
ExecNode* source_node = *MakeExecNode("source", plan.get(), {},
                                      SourceNodeOptions::FromReader(
                                          reader,
                                          GetCpuThreadPool()));

ExecNode* filter_node = *MakeExecNode("filter", plan.get(), {source_node},
                                      FilterNodeOptions{
                                        greater(field_ref("score"), literal(3))
                                      });

ExecNode* project_node = *MakeExecNode("project", plan.get(), {filter_node},
                                       ProjectNodeOptions{
                                         {add(field_ref("score"), literal(1))},
                                         {"score + 1"}
                                       });

arrow::dataset::internal::Initialize();
MakeExecNode("write", plan.get(), {project_node},
             WriteNodeOptions{/*base_dir=*/"/dat", /*...*/});

Declaration is a dplyr-inspired helper which further decreases the boilerplate associated with populating an ExecPlan from C++:

arrow::dataset::internal::Initialize();

std::shared_ptr<RecordBatchReader> reader = GetStreamOfBatches();
ASSERT_OK(Declaration::Sequence(
              {
                  {"source", SourceNodeOptions::FromReader(
                       reader,
                       GetCpuThreadPool())},
                  {"filter", FilterNodeOptions{
                       greater(field_ref("score"), literal(3))}},
                  {"project", ProjectNodeOptions{
                       {add(field_ref("score"), literal(1))},
                       {"score + 1"}}},
                  {"write", WriteNodeOptions{/*base_dir=*/"/dat", /*...*/}},
              })
              .AddToPlan(plan.get()));

Note that a source node can wrap anything which resembles a stream of batches. For example, PR#11032 adds support for use of a DuckDB query as a source node. Similarly, a sink node can wrap anything which absorbs a stream of batches. In the example above we’re writing completed batches to disk. However we can also collect these in memory into a Table or forward them to a RecordBatchReader as an out-of-graph stream. This flexibility allows an ExecPlan to be used as streaming middleware between any endpoints which support Arrow formatted batches.

An arrow::dataset::Dataset can also be wrapped as a source node which pushes all the dataset’s batches into an ExecPlan. This factory is added to the default registry with the name "scan" by calling arrow::dataset::internal::Initialize():

arrow::dataset::internal::Initialize();

std::shared_ptr<Dataset> dataset = GetDataset();

ASSERT_OK(Declaration::Sequence(
              {
                  {"scan", ScanNodeOptions{dataset,
                     /* push down predicate, projection, ... */}},
                  {"filter", FilterNodeOptions{/* ... */}},
                  // ...
              })
              .AddToPlan(plan.get()));

Datasets may be scanned multiple times; just make multiple scan nodes from that dataset. (This is useful, for example, for a self-join, as sketched below.) Note that producing two scan nodes like this will perform all reads and decodes twice.
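For instance, a self-join could be sketched as follows (the key column "id" is illustrative):

// Two scan nodes over the same dataset feeding a hash join; each scan
// will read and decode the dataset independently.
std::shared_ptr<Dataset> dataset = GetDataset();
auto scan_options = std::make_shared<arrow::dataset::ScanOptions>();

Declaration left{"scan", ScanNodeOptions{dataset, scan_options}};
Declaration right{"scan", ScanNodeOptions{dataset, scan_options}};

HashJoinNodeOptions join_opts{JoinType::INNER,
                              /*left_keys=*/{"id"},
                              /*right_keys=*/{"id"}};
Declaration join{"hashjoin", {std::move(left), std::move(right)}, join_opts};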

Constructing ExecNode using Options

ExecNode is the building block of an execution plan; Acero provides built-in ExecNode implementations for a variety of operations.

This is the list of operations associated with the execution plan:

Operations and Options

Operation         Options
----------------  ----------------------------------------
source            arrow::compute::SourceNodeOptions
table_source      arrow::compute::TableSourceNodeOptions
filter            arrow::compute::FilterNodeOptions
project           arrow::compute::ProjectNodeOptions
aggregate         arrow::compute::AggregateNodeOptions
sink              arrow::compute::SinkNodeOptions
consuming_sink    arrow::compute::ConsumingSinkNodeOptions
order_by_sink     arrow::compute::OrderBySinkNodeOptions
select_k_sink     arrow::compute::SelectKSinkNodeOptions
scan              arrow::dataset::ScanNodeOptions
hash_join         arrow::compute::HashJoinNodeOptions
write             arrow::dataset::WriteNodeOptions
union             N/A
table_sink        arrow::compute::TableSinkNodeOptions

source

A source operation can be considered as an entry point to create a streaming execution plan. arrow::compute::SourceNodeOptions are used to create the source operation. The source operation is the most generic and flexible type of source currently available but it can be quite tricky to configure. To process data from files the scan operation is likely a simpler choice.

The source node requires some kind of function that can be called to poll for more data. This function should take no arguments and should return an arrow::Future<std::optional<arrow::ExecBatch>>. This function might be reading a file, iterating through an in-memory structure, or receiving data from a network connection. The Arrow library refers to these functions as arrow::AsyncGenerator and there are a number of utilities for working with them. For this example we use a vector of record batches that we’ve already stored in memory. In addition, the schema of the data must be known up front. Acero must know the schema of the data at each stage of the execution graph before any processing has begun. This means we must supply the schema for a source node separately from the data itself.

Here we define a struct to hold the data generator definition. This includes in-memory batches, a schema, and a function that serves as a data generator:

struct BatchesWithSchema {
  std::vector<cp::ExecBatch> batches;
  std::shared_ptr<arrow::Schema> schema;
  // This method uses internal arrow utilities to
  // convert a vector of record batches to an AsyncGenerator of optional batches
  arrow::AsyncGenerator<std::optional<cp::ExecBatch>> gen() const {
    auto opt_batches = ::arrow::internal::MapVector(
        [](cp::ExecBatch batch) { return std::make_optional(std::move(batch)); },
        batches);
    arrow::AsyncGenerator<std::optional<cp::ExecBatch>> gen;
    gen = arrow::MakeVectorGenerator(std::move(opt_batches));
    return gen;
  }
};

Generating sample batches for computation:

arrow::Result<BatchesWithSchema> MakeBasicBatches() {
  BatchesWithSchema out;
  auto field_vector = {arrow::field("a", arrow::int32()),
                       arrow::field("b", arrow::boolean())};
  ARROW_ASSIGN_OR_RAISE(auto b1_int, GetArrayDataSample<arrow::Int32Type>({0, 4}));
  ARROW_ASSIGN_OR_RAISE(auto b2_int, GetArrayDataSample<arrow::Int32Type>({5, 6, 7}));
  ARROW_ASSIGN_OR_RAISE(auto b3_int, GetArrayDataSample<arrow::Int32Type>({8, 9, 10}));

  ARROW_ASSIGN_OR_RAISE(auto b1_bool,
                        GetArrayDataSample<arrow::BooleanType>({false, true}));
  ARROW_ASSIGN_OR_RAISE(auto b2_bool,
                        GetArrayDataSample<arrow::BooleanType>({true, false, true}));
  ARROW_ASSIGN_OR_RAISE(auto b3_bool,
                        GetArrayDataSample<arrow::BooleanType>({false, true, false}));

  ARROW_ASSIGN_OR_RAISE(auto b1,
                        GetExecBatchFromVectors(field_vector, {b1_int, b1_bool}));
  ARROW_ASSIGN_OR_RAISE(auto b2,
                        GetExecBatchFromVectors(field_vector, {b2_int, b2_bool}));
  ARROW_ASSIGN_OR_RAISE(auto b3,
                        GetExecBatchFromVectors(field_vector, {b3_int, b3_bool}));

  out.batches = {b1, b2, b3};
  out.schema = arrow::schema(field_vector);
  return out;
}

Example of using source (usage of sink is explained in detail in sink):

/// \brief An example demonstrating a source and sink node
///
/// Source-Table Example
/// This example shows how a custom source can be used
/// in an execution plan. This includes source node using pregenerated
/// data and collecting it into a table.
///
/// This sort of custom source is often not needed.  In most cases you can
/// use a scan (for a dataset source) or a source like table_source, array_vector_source,
/// exec_batch_source, or record_batch_source (for in-memory data)
arrow::Status SourceSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};

  cp::Declaration source{"source", std::move(source_node_options)};

  return ExecutePlanAndCollectAsTable(std::move(source));
}
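These examples collect their results through a small helper that is not shown in the snippets. A minimal sketch of it, assuming the convenience function cp::DeclarationToTable (which runs a declaration to completion and collects its output into a table):

arrow::Status ExecutePlanAndCollectAsTable(cp::Declaration plan) {
  // run the plan to completion, collecting its output into a table
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::Table> response_table,
                        cp::DeclarationToTable(std::move(plan)));

  std::cout << "Results : " << response_table->ToString() << std::endl;
  return arrow::Status::OK();
}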

table_source

In the previous example, source, a source node was used to input the data. But when developing an application, if the data is already in memory as a table, it is much easier and more performant to use arrow::compute::TableSourceNodeOptions. Here the input data can be passed as a std::shared_ptr<arrow::Table> along with a max_batch_size. The max_batch_size breaks up large record batches so that they can be processed in parallel. It is important to note that the table batches will not get merged to form larger batches when the source table has a smaller batch size.

Example of using table_source

/// \brief An example showing a table source node
///
/// TableSource-Table Example
/// This example shows how a table_source can be used
/// in an execution plan. This includes a table source node
/// receiving data from a table.  This plan simply collects the
/// data back into a table but nodes could be added that modify
/// or transform the data as well (as is shown in later examples)
arrow::Status TableSourceSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto table, GetTable());

  int max_batch_size = 2;
  auto table_source_options = cp::TableSourceNodeOptions{table, max_batch_size};

  cp::Declaration source{"table_source", std::move(table_source_options)};

  return ExecutePlanAndCollectAsTable(std::move(source));
}

filter

The filter operation, as the name suggests, provides an option to define data filtering criteria. It selects rows matching a given expression. Filters can be written using arrow::compute::Expression. For example, if we wish to keep rows where the value of column a is greater than 3, then we can use the following expression.

Filter example:

/// \brief An example showing a filter node
///
/// Source-Filter-Table
/// This example shows how a filter can be used in an execution plan,
/// to filter data from a source. The output from the execution plan
/// is collected into a table.
arrow::Status ScanFilterSinkExample() {
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::dataset::Dataset> dataset, GetDataset());

  auto options = std::make_shared<arrow::dataset::ScanOptions>();
  // specify the filter.  This filter keeps all rows where the
  // value of the "a" column is greater than 3.
  cp::Expression filter_expr = cp::greater(cp::field_ref("a"), cp::literal(3));
  // set filter for scanner : on-disk / push-down filtering.
  // This step can be skipped if you are not reading from disk.
  options->filter = filter_expr;
  // empty projection
  options->projection = cp::project({}, {});

  // construct the scan node
  std::cout << "Initialized Scanning Options" << std::endl;

  auto scan_node_options = arrow::dataset::ScanNodeOptions{dataset, options};
  std::cout << "Scan node options created" << std::endl;

  cp::Declaration scan{"scan", std::move(scan_node_options)};

  // pipe the scan node into the filter node
  // Need to set the filter in scan node options and filter node options.
  // At scan node it is used for on-disk / push-down filtering.
  // At filter node it is used for in-memory filtering.
  cp::Declaration filter{
      "filter", {std::move(scan)}, cp::FilterNodeOptions(std::move(filter_expr))};

  return ExecutePlanAndCollectAsTable(std::move(filter));
}

project

The project operation rearranges, deletes, transforms, and creates columns. Each output column is computed by evaluating an expression against the source record batch. This is exposed via arrow::compute::ProjectNodeOptions, which requires an arrow::compute::Expression and a name for each of the output columns (if names are not provided, the string representations of the expressions will be used).

Project example:

/// \brief An example showing a project node
///
/// Scan-Project-Table
/// This example shows how a Scan operation can be used to load the data
/// into the execution plan, how a project operation can be applied on the
/// data stream and how the output is collected into a table
arrow::Status ScanProjectSinkExample() {
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::dataset::Dataset> dataset, GetDataset());

  auto options = std::make_shared<arrow::dataset::ScanOptions>();
  // projection
  cp::Expression a_times_2 = cp::call("multiply", {cp::field_ref("a"), cp::literal(2)});
  options->projection = cp::project({}, {});

  auto scan_node_options = arrow::dataset::ScanNodeOptions{dataset, options};

  cp::Declaration scan{"scan", std::move(scan_node_options)};
  cp::Declaration project{
      "project", {std::move(scan)}, cp::ProjectNodeOptions({a_times_2})};

  return ExecutePlanAndCollectAsTable(std::move(project));
}

aggregate

The aggregate node computes various types of aggregates over data.

Arrow supports two types of aggregates: “scalar” aggregates, and “hash” aggregates. Scalar aggregates reduce an array or scalar input to a single scalar output (e.g. computing the mean of a column). Hash aggregates act like GROUP BY in SQL and first partition data based on one or more key columns, then reduce the data in each partition. The aggregate node supports both types of computation, and can compute any number of aggregations at once.

arrow::compute::AggregateNodeOptions is used to define the aggregation criteria. It takes a list of aggregation functions and their options; a list of target fields to aggregate, one per function; and a list of names for the output fields, one per function. Optionally, it takes a list of columns that are used to partition the data, in the case of a hash aggregation. The aggregation functions can be selected from this list of aggregation functions.

Note

This node is a “pipeline breaker” and will fully materialize the dataset in memory. In the future, spillover mechanisms will be added which should alleviate this constraint.

An aggregation can provide grouped or scalar results. For instance, an operation like hash_count provides a count per unique value of the group key as a grouped result, while an operation like sum provides a single record.

Scalar Aggregation example:

/// \brief An example showing an aggregation node to aggregate an entire table
///
/// Source-Aggregation-Table
/// This example shows how an aggregation operation can be applied on an
/// execution plan resulting in a scalar output. The source node loads the
/// data and the aggregation (summing the values in column 'a')
/// is applied on this data. The output is collected into a table (that will
/// have exactly one row)
arrow::Status SourceScalarAggregateSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};

  cp::Declaration source{"source", std::move(source_node_options)};
  auto aggregate_options =
      cp::AggregateNodeOptions{/*aggregates=*/{{"sum", nullptr, "a", "sum(a)"}}};
  cp::Declaration aggregate{
      "aggregate", {std::move(source)}, std::move(aggregate_options)};

  return ExecutePlanAndCollectAsTable(std::move(aggregate));
}

Group Aggregation example:

/// \brief An example showing an aggregation node to perform a group-by operation
///
/// Source-Aggregation-Table
/// This example shows how an aggregation operation can be applied on an
/// execution plan resulting in grouped output. The source node loads the
/// data and the aggregation (counting non-null values in column 'a', grouped
/// by column 'b') is applied on this data. The output is collected into a table
/// that will contain one row for each unique combination of group keys.
arrow::Status SourceGroupAggregateSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};

  cp::Declaration source{"source", std::move(source_node_options)};
  auto options = std::make_shared<cp::CountOptions>(cp::CountOptions::ONLY_VALID);
  auto aggregate_options =
      cp::AggregateNodeOptions{/*aggregates=*/{{"hash_count", options, "a", "count(a)"}},
                               /*keys=*/{"b"}};
  cp::Declaration aggregate{
      "aggregate", {std::move(source)}, std::move(aggregate_options)};

  return ExecutePlanAndCollectAsTable(std::move(aggregate));
}

sink

The sink operation provides output and is the final node of a streaming execution definition. The arrow::compute::SinkNodeOptions interface is used to pass the required options. Similar to the source operator, the sink operator exposes the output with a function that returns a record batch future each time it is called. It is expected the caller will repeatedly call this function until the generator is exhausted (returns std::nullopt). If this function is not called often enough then record batches will accumulate in memory. An execution plan should only have one “terminal” node (one sink node). An ExecPlan can terminate early due to cancellation or an error, before the output is fully consumed. However, the plan can be safely destroyed independently of the sink, which will hold the unconsumed batches through exec_plan->finished().
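A minimal sketch of wiring up a sink node and draining it through a reader (assuming plan and source were created as in the earlier examples):

arrow::Result<std::shared_ptr<arrow::RecordBatchReader>> AttachSink(
    cp::ExecPlan* plan, cp::ExecNode* source,
    std::shared_ptr<arrow::Schema> schema) {
  arrow::AsyncGenerator<std::optional<cp::ExecBatch>> sink_gen;
  ARROW_RETURN_NOT_OK(
      cp::MakeExecNode("sink", plan, {source}, cp::SinkNodeOptions{&sink_gen}));
  // translate the async generator into a synchronous RecordBatchReader;
  // drain this reader promptly or batches will accumulate in memory
  return cp::MakeGeneratorReader(std::move(schema), std::move(sink_gen),
                                 arrow::default_memory_pool());
}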

As a part of the Source Example, the Sink operation is also included:

/// \brief An example demonstrating a source and sink node
///
/// Source-Table Example
/// This example shows how a custom source can be used
/// in an execution plan. This includes source node using pregenerated
/// data and collecting it into a table.
///
/// This sort of custom source is often not needed.  In most cases you can
/// use a scan (for a dataset source) or a source like table_source, array_vector_source,
/// exec_batch_source, or record_batch_source (for in-memory data)
arrow::Status SourceSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};

  cp::Declaration source{"source", std::move(source_node_options)};

  return ExecutePlanAndCollectAsTable(std::move(source));
}

consuming_sink

The consuming_sink operator is a sink operation containing a consuming operation within the execution plan (i.e. the exec plan should not complete until the consumption has completed). Unlike the sink node, this node takes in a callback function that is expected to consume the batch. Once this callback has finished, the execution plan will no longer hold any reference to the batch. The consuming function may be called before a previous invocation has completed. If the consuming function does not run quickly enough then many concurrent executions could pile up, blocking the CPU thread pool. The execution plan will not be marked finished until all consuming function callbacks have been completed. Once all batches have been delivered, the execution plan will wait for the finish future to complete before marking the execution plan finished. This allows for workflows where the consumption function converts batches into async tasks (this is currently done internally for the dataset write node).

Example:

// define a Custom SinkNodeConsumer
std::atomic<uint32_t> batches_seen{0};
arrow::Future<> finish = arrow::Future<>::Make();

struct CustomSinkNodeConsumer : public cp::SinkNodeConsumer {
  CustomSinkNodeConsumer(std::atomic<uint32_t>* batches_seen, arrow::Future<> finish)
      : batches_seen(batches_seen), finish(std::move(finish)) {}

  // Consumption logic can be written here
  arrow::Status Consume(cp::ExecBatch batch) override {
    // data can be consumed in the expected way
    // transfer to another system or just do some work
    // and write to disk
    (*batches_seen)++;
    return arrow::Status::OK();
  }

  arrow::Future<> Finish() override { return finish; }

  std::atomic<uint32_t>* batches_seen;
  arrow::Future<> finish;
};

std::shared_ptr<CustomSinkNodeConsumer> consumer =
    std::make_shared<CustomSinkNodeConsumer>(&batches_seen, finish);

arrow::compute::ExecNode* consuming_sink;

ARROW_ASSIGN_OR_RAISE(consuming_sink, MakeExecNode("consuming_sink", plan.get(),
    {source}, cp::ConsumingSinkNodeOptions(consumer)));

Consuming-Sink example:

/// \brief An example showing a consuming sink node
///
/// Source-Consuming-Sink
/// This example shows how the data can be consumed within the execution plan
/// by using a ConsumingSink node. There is no data output from this execution plan.
arrow::Status SourceConsumingSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};

  cp::Declaration source{"source", std::move(source_node_options)};

  std::atomic<uint32_t> batches_seen{0};
  arrow::Future<> finish = arrow::Future<>::Make();
  struct CustomSinkNodeConsumer : public cp::SinkNodeConsumer {
    CustomSinkNodeConsumer(std::atomic<uint32_t>* batches_seen, arrow::Future<> finish)
        : batches_seen(batches_seen), finish(std::move(finish)) {}

    arrow::Status Init(const std::shared_ptr<arrow::Schema>& schema,
                       cp::BackpressureControl* backpressure_control,
                       cp::ExecPlan* plan) override {
      // This will be called as the plan is started (before the first call to Consume)
      // and provides the schema of the data coming into the node, controls for pausing /
      // resuming input, and a pointer to the plan itself which can be used to access
      // other utilities such as the thread indexer or async task scheduler.
      return arrow::Status::OK();
    }

    arrow::Status Consume(cp::ExecBatch batch) override {
      (*batches_seen)++;
      return arrow::Status::OK();
    }

    arrow::Future<> Finish() override {
      // Here you can perform whatever (possibly async) cleanup is needed, e.g. closing
      // output file handles and flushing remaining work
      return arrow::Future<>::MakeFinished();
    }

    std::atomic<uint32_t>* batches_seen;
    arrow::Future<> finish;
  };
  std::shared_ptr<CustomSinkNodeConsumer> consumer =
      std::make_shared<CustomSinkNodeConsumer>(&batches_seen, finish);

  cp::Declaration consuming_sink{"consuming_sink",
                                 {std::move(source)},
                                 cp::ConsumingSinkNodeOptions(std::move(consumer))};

  // Since we are consuming the data within the plan there is no output and we simply
  // run the plan to completion instead of collecting into a table.
  ARROW_RETURN_NOT_OK(cp::DeclarationToStatus(std::move(consuming_sink)));

  std::cout << "The consuming sink node saw " << batches_seen.load() << " batches"
            << std::endl;
  return arrow::Status::OK();
}

order_by_sink

The order_by_sink operation is an extension to the sink operation. This operation provides the ability to guarantee the ordering of the stream by providing arrow::compute::OrderBySinkNodeOptions. Here arrow::compute::SortOptions are provided to define which columns are used for sorting and whether to sort in ascending or descending order.

Note

This node is a “pipeline breaker” and will fully materialize the dataset in memory. In the future, spillover mechanisms will be added which should alleviate this constraint.

Order-By-Sink example:

arrow::Status ExecutePlanAndCollectAsTableWithCustomSink(
    std::shared_ptr<cp::ExecPlan> plan, std::shared_ptr<arrow::Schema> schema,
    arrow::AsyncGenerator<std::optional<cp::ExecBatch>> sink_gen) {
  // translate sink_gen (async) to sink_reader (sync)
  std::shared_ptr<arrow::RecordBatchReader> sink_reader =
      cp::MakeGeneratorReader(schema, std::move(sink_gen), arrow::default_memory_pool());

  // validate the ExecPlan
  ARROW_RETURN_NOT_OK(plan->Validate());
  std::cout << "ExecPlan created : " << plan->ToString() << std::endl;
  // start the ExecPlan
  ARROW_RETURN_NOT_OK(plan->StartProducing());

  // collect sink_reader into a Table
  std::shared_ptr<arrow::Table> response_table;

  ARROW_ASSIGN_OR_RAISE(response_table,
                        arrow::Table::FromRecordBatchReader(sink_reader.get()));

  std::cout << "Results : " << response_table->ToString() << std::endl;

  // stop producing
  plan->StopProducing();
  // plan mark finished
  auto future = plan->finished();
  return future.status();
}

/// \brief An example showing an order-by node
///
/// Source-OrderBy-Sink
/// In this example, the data enters through the source node
/// and the data is ordered in the sink node. The order can be
/// ASCENDING or DESCENDING and it is configurable. The output
/// is obtained as a table from the sink node.
arrow::Status SourceOrderBySinkExample() {
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<cp::ExecPlan> plan,
                        cp::ExecPlan::Make(*cp::threaded_exec_context()));

  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeSortTestBasicBatches());

  arrow::AsyncGenerator<std::optional<cp::ExecBatch>> sink_gen;

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};
  ARROW_ASSIGN_OR_RAISE(cp::ExecNode * source,
                        cp::MakeExecNode("source", plan.get(), {}, source_node_options));

  ARROW_RETURN_NOT_OK(cp::MakeExecNode(
      "order_by_sink", plan.get(), {source},
      cp::OrderBySinkNodeOptions{
          cp::SortOptions{{cp::SortKey{"a", cp::SortOrder::Descending}}}, &sink_gen}));

  return ExecutePlanAndCollectAsTableWithCustomSink(plan, basic_data.schema, sink_gen);
}

select_k_sink

The select_k_sink operation enables selecting the top/bottom K elements, similar to a SQL ORDER BY ... LIMIT K clause. The selection criteria are defined with arrow::compute::SelectKOptions, which builds on the OrderBySinkNode definition. This option returns a sink node that receives the input and then computes the top-k/bottom-k rows.

Note

This node is a “pipeline breaker” and will fully materialize the input in memory. In the future, spillover mechanisms will be added which should alleviate this constraint.

SelectK example:

/// \brief An example showing a select-k node
///
/// Source-KSelect
/// This example shows how K number of elements can be selected
/// either from the top or bottom. The output node is a modified
/// sink node where output can be obtained as a table.
arrow::Status SourceKSelectExample() {
  ARROW_ASSIGN_OR_RAISE(auto input, MakeGroupableBatches());
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<cp::ExecPlan> plan,
                        cp::ExecPlan::Make(*cp::threaded_exec_context()));
  arrow::AsyncGenerator<std::optional<cp::ExecBatch>> sink_gen;

  ARROW_ASSIGN_OR_RAISE(
      cp::ExecNode * source,
      cp::MakeExecNode("source", plan.get(), {},
                       cp::SourceNodeOptions{input.schema, input.gen()}));

  cp::SelectKOptions options = cp::SelectKOptions::TopKDefault(/*k=*/2, {"i32"});

  ARROW_RETURN_NOT_OK(cp::MakeExecNode("select_k_sink", plan.get(), {source},
                                       cp::SelectKSinkNodeOptions{options, &sink_gen}));

  auto schema = arrow::schema(
      {arrow::field("i32", arrow::int32()), arrow::field("str", arrow::utf8())});

  return ExecutePlanAndCollectAsTableWithCustomSink(plan, schema, sink_gen);
}

table_sink

The table_sink node provides the ability to receive the output as an in-memory table. This is simpler to use than the other sink nodes provided by the streaming execution engine but it only makes sense when the output fits comfortably in memory. The node is created using arrow::compute::TableSinkNodeOptions.

Example of using table_sink

/// \brief An example showing a table sink node
///
/// TableSink Example
/// This example shows how a table_sink can be used
/// in an execution plan. This includes a source node
/// receiving data as batches and the table sink node
/// which emits the output as a table.
arrow::Status TableSinkExample() {
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<cp::ExecPlan> plan,
                        cp::ExecPlan::Make(*cp::threaded_exec_context()));

  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};

  ARROW_ASSIGN_OR_RAISE(cp::ExecNode * source,
                        cp::MakeExecNode("source", plan.get(), {}, source_node_options));

  std::shared_ptr<arrow::Table> output_table;
  auto table_sink_options = cp::TableSinkNodeOptions{&output_table};

  ARROW_RETURN_NOT_OK(
      cp::MakeExecNode("table_sink", plan.get(), {source}, table_sink_options));
  // validate the ExecPlan
  ARROW_RETURN_NOT_OK(plan->Validate());
  std::cout << "ExecPlan created : " << plan->ToString() << std::endl;
  // start the ExecPlan
  ARROW_RETURN_NOT_OK(plan->StartProducing());

  // Wait for the plan to finish
  auto finished = plan->finished();
  ARROW_RETURN_NOT_OK(finished.status());
  std::cout << "Results : " << output_table->ToString() << std::endl;
  return arrow::Status::OK();
}

scan

scan is an operation used to load and process datasets. It should be preferred over the more generic source node when your input is a dataset. The behavior is defined using arrow::dataset::ScanNodeOptions. More information on datasets and the various scan options can be found in Tabular Datasets.

This node is capable of applying pushdown filters to the file readers, which reduces the amount of data that needs to be read. You will typically want to supply the same filter expression to both the scan node and a following filter node: the scan node applies it as a best-effort pushdown while the filter node applies it exactly in memory, so the filtering is done in two different places.

Scan example:

/// \brief An example demonstrating a scan and sink node
///
/// Scan-Table
/// This example shows how scan operation can be applied on a dataset.
/// There are operations that can be applied on the scan (project, filter)
/// and the input data can be processed. The output is obtained as a table
arrow::Status ScanSinkExample() {
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::dataset::Dataset> dataset, GetDataset());

  auto options = std::make_shared<arrow::dataset::ScanOptions>();
  options->projection = cp::project({}, {});  // create empty projection

  // construct the scan node
  auto scan_node_options = arrow::dataset::ScanNodeOptions{dataset, options};

  cp::Declaration scan{"scan", std::move(scan_node_options)};

  return ExecutePlanAndCollectAsTable(std::move(scan));
}

write

The write node saves query results as a dataset of files in a format like Parquet, Feather, CSV, etc., using the Tabular Datasets functionality in Arrow. The write options are provided via arrow::dataset::WriteNodeOptions, which in turn contains arrow::dataset::FileSystemDatasetWriteOptions. The latter provides control over the written dataset, including options like the output directory, file naming scheme, and so on.

Write example:

/// \brief An example showing a write node
/// \param file_path The destination to write to
///
/// Scan-Filter-Write
/// This example shows how scan node can be used to load the data
/// and after processing how it can be written to disk.
arrow::Status ScanFilterWriteExample(const std::string& file_path) {
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::dataset::Dataset> dataset, GetDataset());

  auto options = std::make_shared<arrow::dataset::ScanOptions>();
  // empty projection
  options->projection = cp::project({}, {});

  auto scan_node_options = arrow::dataset::ScanNodeOptions{dataset, options};

  cp::Declaration scan{"scan", std::move(scan_node_options)};

  std::string root_path = "";
  std::string uri = "file://" + file_path;
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::fs::FileSystem> filesystem,
                        arrow::fs::FileSystemFromUri(uri, &root_path));

  auto base_path = root_path + "/parquet_dataset";
  // Uncomment the following line, if run repeatedly
  // ARROW_RETURN_NOT_OK(filesystem->DeleteDirContents(base_path));
  ARROW_RETURN_NOT_OK(filesystem->CreateDir(base_path));

  // The partition schema determines which fields are part of the partitioning.
  auto partition_schema = arrow::schema({arrow::field("a", arrow::int32())});
  // We'll use Hive-style partitioning,
  // which creates directories with "key=value" pairs.

  auto partitioning =
      std::make_shared<arrow::dataset::HivePartitioning>(partition_schema);
  // We'll write Parquet files.
  auto format = std::make_shared<arrow::dataset::ParquetFileFormat>();

  arrow::dataset::FileSystemDatasetWriteOptions write_options;
  write_options.file_write_options = format->DefaultWriteOptions();
  write_options.filesystem = filesystem;
  write_options.base_dir = base_path;
  write_options.partitioning = partitioning;
  write_options.basename_template = "part{i}.parquet";

  arrow::dataset::WriteNodeOptions write_node_options{write_options};

  cp::Declaration write{"write", {std::move(scan)}, std::move(write_node_options)};

  // Since the write node has no output we simply run the plan to completion and the
  // data should be written
  ARROW_RETURN_NOT_OK(cp::DeclarationToStatus(std::move(write)));

  std::cout << "Dataset written to " << base_path << std::endl;
  return arrow::Status::OK();
}

union

union merges multiple data streams with the same schema into one, similar to a SQL UNION ALL clause.

The following example demonstrates how this can be achieved using two data sources.

Union example:

/// \brief An example showing a union node
///
/// Source-Union-Table
/// This example shows how a union operation can be applied on two
/// data sources. The output is collected into a table.
arrow::Status SourceUnionSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  cp::Declaration lhs{"source",
                      cp::SourceNodeOptions{basic_data.schema, basic_data.gen()}};
  lhs.label = "lhs";
  cp::Declaration rhs{"source",
                      cp::SourceNodeOptions{basic_data.schema, basic_data.gen()}};
  rhs.label = "rhs";
  cp::Declaration union_plan{
      "union", {std::move(lhs), std::move(rhs)}, cp::ExecNodeOptions{}};

  return ExecutePlanAndCollectAsTable(std::move(union_plan));
}

hash_join

The hash_join operation provides the relational algebra join operation using a hash-based algorithm. arrow::compute::HashJoinNodeOptions contains the options required to define a join. The hash_join supports left/right/full semi/anti/outer joins. The join keys (i.e. the column(s) to join on) and suffixes (i.e. a suffix term like "_x", which can be appended to column names duplicated in both the left and right relations) can be set via the join options. Read more on hash joins.

Hash-Join example:

/// \brief An example showing a hash join node
///
/// Source-HashJoin-Table
/// This example shows how source node gets the data and how a self-join
/// is applied on the data. The join options are configurable. The output
/// is collected into a table.
arrow::Status SourceHashJoinSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto input, MakeGroupableBatches());

  cp::Declaration left{"source", cp::SourceNodeOptions{input.schema, input.gen()}};
  cp::Declaration right{"source", cp::SourceNodeOptions{input.schema, input.gen()}};

  cp::HashJoinNodeOptions join_opts{
      cp::JoinType::INNER,
      /*left_keys=*/{"str"},
      /*right_keys=*/{"str"}, cp::literal(true), "l_", "r_"};

  cp::Declaration hashjoin{
      "hashjoin", {std::move(left), std::move(right)}, std::move(join_opts)};

  return ExecutePlanAndCollectAsTable(std::move(hashjoin));
}

Summary

Examples of these nodes can be found in cpp/examples/arrow/execution_plan_documentation_examples.cc in the Arrow source.

Complete Example:

 19#include <arrow/array.h>
 20#include <arrow/builder.h>
 21
 22#include <arrow/compute/api.h>
 23#include <arrow/compute/api_vector.h>
 24#include <arrow/compute/cast.h>
 25#include <arrow/compute/exec/exec_plan.h>
 26
 27#include <arrow/csv/api.h>
 28
 29#include <arrow/dataset/dataset.h>
 30#include <arrow/dataset/file_base.h>
 31#include <arrow/dataset/file_parquet.h>
 32#include <arrow/dataset/plan.h>
 33#include <arrow/dataset/scanner.h>
 34
 35#include <arrow/io/interfaces.h>
 36#include <arrow/io/memory.h>
 37
 38#include <arrow/result.h>
 39#include <arrow/status.h>
 40#include <arrow/table.h>
 41
 42#include <arrow/ipc/api.h>
 43
 44#include <arrow/util/future.h>
 45#include <arrow/util/range.h>
 46#include <arrow/util/thread_pool.h>
 47#include <arrow/util/vector.h>
 48
 49#include <iostream>
 50#include <memory>
 51#include <utility>
 52
 53// Demonstrate various operators in Arrow Streaming Execution Engine
 54
 55namespace cp = ::arrow::compute;
 56
 57constexpr char kSep[] = "******";
 58
 59void PrintBlock(const std::string& msg) {
 60  std::cout << "\n\t" << kSep << " " << msg << " " << kSep << "\n" << std::endl;
 61}
 62
 63template <typename TYPE,
 64          typename = typename std::enable_if<arrow::is_number_type<TYPE>::value |
 65                                             arrow::is_boolean_type<TYPE>::value |
 66                                             arrow::is_temporal_type<TYPE>::value>::type>
 67arrow::Result<std::shared_ptr<arrow::Array>> GetArrayDataSample(
 68    const std::vector<typename TYPE::c_type>& values) {
 69  using ArrowBuilderType = typename arrow::TypeTraits<TYPE>::BuilderType;
 70  ArrowBuilderType builder;
 71  ARROW_RETURN_NOT_OK(builder.Reserve(values.size()));
 72  ARROW_RETURN_NOT_OK(builder.AppendValues(values));
 73  return builder.Finish();
 74}
 75
 76template <class TYPE>
 77arrow::Result<std::shared_ptr<arrow::Array>> GetBinaryArrayDataSample(
 78    const std::vector<std::string>& values) {
 79  using ArrowBuilderType = typename arrow::TypeTraits<TYPE>::BuilderType;
 80  ArrowBuilderType builder;
 81  ARROW_RETURN_NOT_OK(builder.Reserve(values.size()));
 82  ARROW_RETURN_NOT_OK(builder.AppendValues(values));
 83  return builder.Finish();
 84}
 85
 86arrow::Result<std::shared_ptr<arrow::RecordBatch>> GetSampleRecordBatch(
 87    const arrow::ArrayVector array_vector, const arrow::FieldVector& field_vector) {
 88  std::shared_ptr<arrow::RecordBatch> record_batch;
 89  ARROW_ASSIGN_OR_RAISE(auto struct_result,
 90                        arrow::StructArray::Make(array_vector, field_vector));
 91  return record_batch->FromStructArray(struct_result);
 92}
 93
 94/// \brief Create a sample table
 95/// The table's contents will be:
 96/// a,b
 97/// 1,null
 98/// 2,true
 99/// null,true
100/// 3,false
101/// null,true
102/// 4,false
103/// 5,null
104/// 6,false
105/// 7,false
106/// 8,true
107/// \return The created table
108
109arrow::Result<std::shared_ptr<arrow::Table>> GetTable() {
110  auto null_long = std::numeric_limits<int64_t>::quiet_NaN();
111  ARROW_ASSIGN_OR_RAISE(auto int64_array,
112                        GetArrayDataSample<arrow::Int64Type>(
113                            {1, 2, null_long, 3, null_long, 4, 5, 6, 7, 8}));
114
115  arrow::BooleanBuilder boolean_builder;
116  std::shared_ptr<arrow::BooleanArray> bool_array;
117
118  std::vector<uint8_t> bool_values = {false, true,  true,  false, true,
119                                      false, false, false, false, true};
120  std::vector<bool> is_valid = {false, true,  true, true, true,
121                                true,  false, true, true, true};
122
123  ARROW_RETURN_NOT_OK(boolean_builder.Reserve(10));
124
125  ARROW_RETURN_NOT_OK(boolean_builder.AppendValues(bool_values, is_valid));
126
127  ARROW_RETURN_NOT_OK(boolean_builder.Finish(&bool_array));
128
129  auto record_batch =
130      arrow::RecordBatch::Make(arrow::schema({arrow::field("a", arrow::int64()),
131                                              arrow::field("b", arrow::boolean())}),
132                               10, {int64_array, bool_array});
133  ARROW_ASSIGN_OR_RAISE(auto table, arrow::Table::FromRecordBatches({record_batch}));
134  return table;
135}
136
137/// \brief Create a sample dataset
138/// \return An in-memory dataset based on GetTable()
139arrow::Result<std::shared_ptr<arrow::dataset::Dataset>> GetDataset() {
140  ARROW_ASSIGN_OR_RAISE(auto table, GetTable());
141  auto ds = std::make_shared<arrow::dataset::InMemoryDataset>(table);
142  return ds;
143}
144
145arrow::Result<cp::ExecBatch> GetExecBatchFromVectors(
146    const arrow::FieldVector& field_vector, const arrow::ArrayVector& array_vector) {
147  std::shared_ptr<arrow::RecordBatch> record_batch;
148  ARROW_ASSIGN_OR_RAISE(auto res_batch, GetSampleRecordBatch(array_vector, field_vector));
149  cp::ExecBatch batch{*res_batch};
150  return batch;
151}

// (Doc section: BatchesWithSchema Definition)
struct BatchesWithSchema {
  std::vector<cp::ExecBatch> batches;
  std::shared_ptr<arrow::Schema> schema;
  // This method uses internal arrow utilities to
  // convert a vector of exec batches into an AsyncGenerator of optional batches
  arrow::AsyncGenerator<std::optional<cp::ExecBatch>> gen() const {
    auto opt_batches = ::arrow::internal::MapVector(
        [](cp::ExecBatch batch) { return std::make_optional(std::move(batch)); },
        batches);
    arrow::AsyncGenerator<std::optional<cp::ExecBatch>> gen;
    gen = arrow::MakeVectorGenerator(std::move(opt_batches));
    return gen;
  }
};
// (Doc section: BatchesWithSchema Definition)

// (Doc section: MakeBasicBatches Definition)
arrow::Result<BatchesWithSchema> MakeBasicBatches() {
  BatchesWithSchema out;
  auto field_vector = {arrow::field("a", arrow::int32()),
                       arrow::field("b", arrow::boolean())};
  ARROW_ASSIGN_OR_RAISE(auto b1_int, GetArrayDataSample<arrow::Int32Type>({0, 4}));
  ARROW_ASSIGN_OR_RAISE(auto b2_int, GetArrayDataSample<arrow::Int32Type>({5, 6, 7}));
  ARROW_ASSIGN_OR_RAISE(auto b3_int, GetArrayDataSample<arrow::Int32Type>({8, 9, 10}));

  ARROW_ASSIGN_OR_RAISE(auto b1_bool,
                        GetArrayDataSample<arrow::BooleanType>({false, true}));
  ARROW_ASSIGN_OR_RAISE(auto b2_bool,
                        GetArrayDataSample<arrow::BooleanType>({true, false, true}));
  ARROW_ASSIGN_OR_RAISE(auto b3_bool,
                        GetArrayDataSample<arrow::BooleanType>({false, true, false}));

  ARROW_ASSIGN_OR_RAISE(auto b1,
                        GetExecBatchFromVectors(field_vector, {b1_int, b1_bool}));
  ARROW_ASSIGN_OR_RAISE(auto b2,
                        GetExecBatchFromVectors(field_vector, {b2_int, b2_bool}));
  ARROW_ASSIGN_OR_RAISE(auto b3,
                        GetExecBatchFromVectors(field_vector, {b3_int, b3_bool}));

  out.batches = {b1, b2, b3};
  out.schema = arrow::schema(field_vector);
  return out;
}
// (Doc section: MakeBasicBatches Definition)

arrow::Result<BatchesWithSchema> MakeSortTestBasicBatches() {
  BatchesWithSchema out;
  auto field = arrow::field("a", arrow::int32());
  ARROW_ASSIGN_OR_RAISE(auto b1_int, GetArrayDataSample<arrow::Int32Type>({1, 3, 0, 2}));
  ARROW_ASSIGN_OR_RAISE(auto b2_int,
                        GetArrayDataSample<arrow::Int32Type>({121, 101, 120, 12}));
  ARROW_ASSIGN_OR_RAISE(auto b3_int,
                        GetArrayDataSample<arrow::Int32Type>({10, 110, 210, 121}));
  ARROW_ASSIGN_OR_RAISE(auto b4_int,
                        GetArrayDataSample<arrow::Int32Type>({51, 101, 2, 34}));
  ARROW_ASSIGN_OR_RAISE(auto b5_int,
                        GetArrayDataSample<arrow::Int32Type>({11, 31, 1, 12}));
  ARROW_ASSIGN_OR_RAISE(auto b6_int,
                        GetArrayDataSample<arrow::Int32Type>({12, 101, 120, 12}));
  ARROW_ASSIGN_OR_RAISE(auto b7_int,
                        GetArrayDataSample<arrow::Int32Type>({0, 110, 210, 11}));
  ARROW_ASSIGN_OR_RAISE(auto b8_int,
                        GetArrayDataSample<arrow::Int32Type>({51, 10, 2, 3}));

  // Every batch must match the declared schema, so each batch carries a
  // single "a" column.
  ARROW_ASSIGN_OR_RAISE(auto b1, GetExecBatchFromVectors({field}, {b1_int}));
  ARROW_ASSIGN_OR_RAISE(auto b2, GetExecBatchFromVectors({field}, {b2_int}));
  ARROW_ASSIGN_OR_RAISE(auto b3, GetExecBatchFromVectors({field}, {b3_int}));
  ARROW_ASSIGN_OR_RAISE(auto b4, GetExecBatchFromVectors({field}, {b4_int}));
  ARROW_ASSIGN_OR_RAISE(auto b5, GetExecBatchFromVectors({field}, {b5_int}));
  ARROW_ASSIGN_OR_RAISE(auto b6, GetExecBatchFromVectors({field}, {b6_int}));
  ARROW_ASSIGN_OR_RAISE(auto b7, GetExecBatchFromVectors({field}, {b7_int}));
  ARROW_ASSIGN_OR_RAISE(auto b8, GetExecBatchFromVectors({field}, {b8_int}));
  out.batches = {b1, b2, b3, b4, b5, b6, b7, b8};
  out.schema = arrow::schema({field});
  return out;
}

arrow::Result<BatchesWithSchema> MakeGroupableBatches(int multiplicity = 1) {
  BatchesWithSchema out;
  auto fields = {arrow::field("i32", arrow::int32()), arrow::field("str", arrow::utf8())};
  ARROW_ASSIGN_OR_RAISE(auto b1_int, GetArrayDataSample<arrow::Int32Type>({12, 7, 3}));
  ARROW_ASSIGN_OR_RAISE(auto b2_int, GetArrayDataSample<arrow::Int32Type>({-2, -1, 3}));
  ARROW_ASSIGN_OR_RAISE(auto b3_int, GetArrayDataSample<arrow::Int32Type>({5, 3, -8}));
  ARROW_ASSIGN_OR_RAISE(auto b1_str, GetBinaryArrayDataSample<arrow::StringType>(
                                         {"alpha", "beta", "alpha"}));
  ARROW_ASSIGN_OR_RAISE(auto b2_str, GetBinaryArrayDataSample<arrow::StringType>(
                                         {"alpha", "gamma", "alpha"}));
  ARROW_ASSIGN_OR_RAISE(auto b3_str, GetBinaryArrayDataSample<arrow::StringType>(
                                         {"gamma", "beta", "alpha"}));
  ARROW_ASSIGN_OR_RAISE(auto b1, GetExecBatchFromVectors(fields, {b1_int, b1_str}));
  ARROW_ASSIGN_OR_RAISE(auto b2, GetExecBatchFromVectors(fields, {b2_int, b2_str}));
  ARROW_ASSIGN_OR_RAISE(auto b3, GetExecBatchFromVectors(fields, {b3_int, b3_str}));
  out.batches = {b1, b2, b3};

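  // Repeat the three base batches to scale up the input when a larger
  // multiplicity is requested.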
  size_t batch_count = out.batches.size();
  for (int repeat = 1; repeat < multiplicity; ++repeat) {
    for (size_t i = 0; i < batch_count; ++i) {
      out.batches.push_back(out.batches[i]);
    }
  }

  out.schema = arrow::schema(fields);
  return out;
}

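/// \brief Run a declaration to completion and print the results
///
/// DeclarationToTable implicitly adds a sink node to the plan, runs the plan
/// to completion, and collects the output into a single table.  (Sibling
/// helpers such as DeclarationToBatches or DeclarationToReader collect the
/// output in other forms.)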
arrow::Status ExecutePlanAndCollectAsTable(cp::Declaration plan) {
  // collect the plan's output into a table
  std::shared_ptr<arrow::Table> response_table;
  ARROW_ASSIGN_OR_RAISE(response_table, cp::DeclarationToTable(std::move(plan)));

  std::cout << "Results : " << response_table->ToString() << std::endl;

  return arrow::Status::OK();
}

// (Doc section: Scan Example)

/// \brief An example demonstrating a scan and sink node
///
/// Scan-Table
/// This example shows how a scan operation can be applied to a dataset.
/// A scan can also apply operations (project, filter) to the input data
/// as it is read. The output is collected into a table.
arrow::Status ScanSinkExample() {
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::dataset::Dataset> dataset, GetDataset());

  auto options = std::make_shared<arrow::dataset::ScanOptions>();
  options->projection = cp::project({}, {});  // create empty projection

  // construct the scan node
  auto scan_node_options = arrow::dataset::ScanNodeOptions{dataset, options};

  cp::Declaration scan{"scan", std::move(scan_node_options)};

  return ExecutePlanAndCollectAsTable(std::move(scan));
}
// (Doc section: Scan Example)

// (Doc section: Source Example)

/// \brief An example demonstrating a source and sink node
///
/// Source-Table Example
/// This example shows how a custom source can be used
/// in an execution plan. This includes a source node using pregenerated
/// data and collecting it into a table.
///
/// This sort of custom source is often not needed. In most cases you can
/// use a scan (for a dataset source) or a source like table_source, array_vector_source,
/// exec_batch_source, or record_batch_source (for in-memory data)
arrow::Status SourceSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};

  cp::Declaration source{"source", std::move(source_node_options)};

  return ExecutePlanAndCollectAsTable(std::move(source));
}
// (Doc section: Source Example)

// (Doc section: Table Source Example)

/// \brief An example showing a table source node
///
/// TableSource-Table Example
/// This example shows how a table_source can be used
/// in an execution plan. This includes a table source node
/// receiving data from a table. This plan simply collects the
/// data back into a table but nodes could be added that modify
/// or transform the data as well (as is shown in later examples)
arrow::Status TableSourceSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto table, GetTable());

  int max_batch_size = 2;
  auto table_source_options = cp::TableSourceNodeOptions{table, max_batch_size};

  cp::Declaration source{"table_source", std::move(table_source_options)};

  return ExecutePlanAndCollectAsTable(std::move(source));
}
// (Doc section: Table Source Example)

// (Doc section: Filter Example)

/// \brief An example showing a filter node
///
/// Source-Filter-Table
/// This example shows how a filter can be used in an execution plan,
/// to filter data from a source. The output from the execution plan
/// is collected into a table.
arrow::Status ScanFilterSinkExample() {
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::dataset::Dataset> dataset, GetDataset());

  auto options = std::make_shared<arrow::dataset::ScanOptions>();
  // specify the filter.  This filter keeps only the rows where the
  // value of the "a" column is greater than 3.
  cp::Expression filter_expr = cp::greater(cp::field_ref("a"), cp::literal(3));
  // set the filter for the scanner: on-disk / push-down filtering.
  // This step can be skipped if you are not reading from disk.
  options->filter = filter_expr;
  // empty projection
  options->projection = cp::project({}, {});

  // construct the scan node
  auto scan_node_options = arrow::dataset::ScanNodeOptions{dataset, options};

  cp::Declaration scan{"scan", std::move(scan_node_options)};

  // pipe the scan node into the filter node
  // The filter must be set in both the scan node options and the filter node options.
  // At the scan node it is used for on-disk / push-down filtering.
  // At the filter node it is used for in-memory filtering.
  cp::Declaration filter{
      "filter", {std::move(scan)}, cp::FilterNodeOptions(std::move(filter_expr))};

  return ExecutePlanAndCollectAsTable(std::move(filter));
}

// (Doc section: Filter Example)

// (Doc section: Project Example)

/// \brief An example showing a project node
///
/// Scan-Project-Table
/// This example shows how a Scan operation can be used to load the data
/// into the execution plan, how a project operation can be applied on the
/// data stream and how the output is collected into a table
arrow::Status ScanProjectSinkExample() {
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::dataset::Dataset> dataset, GetDataset());

  auto options = std::make_shared<arrow::dataset::ScanOptions>();
  // The projection expression is evaluated by the project node below; the
  // scan itself uses an empty projection.
  cp::Expression a_times_2 = cp::call("multiply", {cp::field_ref("a"), cp::literal(2)});
  options->projection = cp::project({}, {});

  auto scan_node_options = arrow::dataset::ScanNodeOptions{dataset, options};

  cp::Declaration scan{"scan", std::move(scan_node_options)};
  cp::Declaration project{
      "project", {std::move(scan)}, cp::ProjectNodeOptions({a_times_2})};

  return ExecutePlanAndCollectAsTable(std::move(project));
}

// (Doc section: Project Example)

// (Doc section: Scalar Aggregate Example)

/// \brief An example showing an aggregation node to aggregate an entire table
///
/// Source-Aggregation-Table
/// This example shows how an aggregation operation can be applied on an
/// execution plan resulting in a scalar output. The source node loads the
/// data and the aggregation (summing the values in column 'a')
/// is applied on this data. The output is collected into a table (that will
/// have exactly one row)
arrow::Status SourceScalarAggregateSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};

  cp::Declaration source{"source", std::move(source_node_options)};
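  // "sum" is a scalar aggregate function; because no keys are given the node
  // reduces the entire input to a single row. The last string is the name of
  // the output column.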
  auto aggregate_options =
      cp::AggregateNodeOptions{/*aggregates=*/{{"sum", nullptr, "a", "sum(a)"}}};
  cp::Declaration aggregate{
      "aggregate", {std::move(source)}, std::move(aggregate_options)};

  return ExecutePlanAndCollectAsTable(std::move(aggregate));
}
// (Doc section: Scalar Aggregate Example)

// (Doc section: Group Aggregate Example)

/// \brief An example showing an aggregation node to perform a group-by operation
///
/// Source-Aggregation-Table
/// This example shows how an aggregation operation can be applied on an
/// execution plan resulting in grouped output. The source node loads the
/// data and the aggregation (counting the non-null values of column 'a',
/// grouped by column 'b') is applied on this data. The output is collected
/// into a table that will contain one row for each unique combination of
/// group keys.
arrow::Status SourceGroupAggregateSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};

  cp::Declaration source{"source", std::move(source_node_options)};
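  // "hash_count" is the grouped variant of "count"; grouped aggregate
  // functions use the "hash_" prefix and require at least one key.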
  auto options = std::make_shared<cp::CountOptions>(cp::CountOptions::ONLY_VALID);
  auto aggregate_options =
      cp::AggregateNodeOptions{/*aggregates=*/{{"hash_count", options, "a", "count(a)"}},
                               /*keys=*/{"b"}};
  cp::Declaration aggregate{
      "aggregate", {std::move(source)}, std::move(aggregate_options)};

  return ExecutePlanAndCollectAsTable(std::move(aggregate));
}
// (Doc section: Group Aggregate Example)

// (Doc section: ConsumingSink Example)

/// \brief An example showing a consuming sink node
///
/// Source-Consuming-Sink
/// This example shows how the data can be consumed within the execution plan
/// by using a ConsumingSink node. There is no data output from this execution plan.
arrow::Status SourceConsumingSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};

  cp::Declaration source{"source", std::move(source_node_options)};

  std::atomic<uint32_t> batches_seen{0};
  struct CustomSinkNodeConsumer : public cp::SinkNodeConsumer {
    explicit CustomSinkNodeConsumer(std::atomic<uint32_t>* batches_seen)
        : batches_seen(batches_seen) {}

    arrow::Status Init(const std::shared_ptr<arrow::Schema>& schema,
                       cp::BackpressureControl* backpressure_control,
                       cp::ExecPlan* plan) override {
      // This will be called as the plan is started (before the first call to Consume)
      // and provides the schema of the data coming into the node, controls for pausing /
      // resuming input, and a pointer to the plan itself which can be used to access
      // other utilities such as the thread indexer or async task scheduler.
      return arrow::Status::OK();
    }

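    // Consume is called once for each batch produced by the plan and may be
    // invoked from multiple threads concurrently, hence the atomic counter.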
    arrow::Status Consume(cp::ExecBatch batch) override {
      (*batches_seen)++;
      return arrow::Status::OK();
    }

    arrow::Future<> Finish() override {
      // Here you can perform whatever (possibly async) cleanup is needed, e.g. closing
      // output file handles and flushing remaining work
      return arrow::Future<>::MakeFinished();
    }

    std::atomic<uint32_t>* batches_seen;
  };
  std::shared_ptr<CustomSinkNodeConsumer> consumer =
      std::make_shared<CustomSinkNodeConsumer>(&batches_seen);

  cp::Declaration consuming_sink{"consuming_sink",
                                 {std::move(source)},
                                 cp::ConsumingSinkNodeOptions(std::move(consumer))};

  // Since we are consuming the data within the plan there is no output and we simply
  // run the plan to completion instead of collecting into a table.
  ARROW_RETURN_NOT_OK(cp::DeclarationToStatus(std::move(consuming_sink)));

  std::cout << "The consuming sink node saw " << batches_seen.load() << " batches"
            << std::endl;
  return arrow::Status::OK();
}
// (Doc section: ConsumingSink Example)

// (Doc section: OrderBySink Example)

arrow::Status ExecutePlanAndCollectAsTableWithCustomSink(
    std::shared_ptr<cp::ExecPlan> plan, std::shared_ptr<arrow::Schema> schema,
    arrow::AsyncGenerator<std::optional<cp::ExecBatch>> sink_gen) {
  // translate sink_gen (async) to sink_reader (sync)
  std::shared_ptr<arrow::RecordBatchReader> sink_reader =
      cp::MakeGeneratorReader(schema, std::move(sink_gen), arrow::default_memory_pool());

  // validate the ExecPlan
  ARROW_RETURN_NOT_OK(plan->Validate());
  std::cout << "ExecPlan created : " << plan->ToString() << std::endl;
  // start the ExecPlan
  ARROW_RETURN_NOT_OK(plan->StartProducing());

  // collect sink_reader into a Table
  std::shared_ptr<arrow::Table> response_table;

  ARROW_ASSIGN_OR_RAISE(response_table,
                        arrow::Table::FromRecordBatchReader(sink_reader.get()));

  std::cout << "Results : " << response_table->ToString() << std::endl;

  // stop producing
  plan->StopProducing();
  // wait until the plan has been marked finished
  auto future = plan->finished();
  return future.status();
}

/// \brief An example showing an order-by node
///
/// Source-OrderBy-Sink
/// In this example, the data enters through the source node
/// and the data is ordered in the sink node. The order can be
/// ASCENDING or DESCENDING and it is configurable. The output
/// is obtained as a table from the sink node.
arrow::Status SourceOrderBySinkExample() {
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<cp::ExecPlan> plan,
                        cp::ExecPlan::Make(*cp::threaded_exec_context()));

  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeSortTestBasicBatches());

  arrow::AsyncGenerator<std::optional<cp::ExecBatch>> sink_gen;

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};
  ARROW_ASSIGN_OR_RAISE(cp::ExecNode * source,
                        cp::MakeExecNode("source", plan.get(), {}, source_node_options));

  ARROW_RETURN_NOT_OK(cp::MakeExecNode(
      "order_by_sink", plan.get(), {source},
      cp::OrderBySinkNodeOptions{
          cp::SortOptions{{cp::SortKey{"a", cp::SortOrder::Descending}}}, &sink_gen}));

  return ExecutePlanAndCollectAsTableWithCustomSink(plan, basic_data.schema, sink_gen);
}

// (Doc section: OrderBySink Example)

// (Doc section: HashJoin Example)

/// \brief An example showing a hash join node
///
/// Source-HashJoin-Table
/// This example shows how a source node gets the data and how a self-join
/// is applied on the data. The join options are configurable. The output
/// is collected into a table.
arrow::Status SourceHashJoinSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto input, MakeGroupableBatches());

  cp::Declaration left{"source", cp::SourceNodeOptions{input.schema, input.gen()}};
  cp::Declaration right{"source", cp::SourceNodeOptions{input.schema, input.gen()}};

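  // An inner join on the "str" column; the trailing "l_" / "r_" strings are
  // used to disambiguate output column names that would otherwise collide.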
  cp::HashJoinNodeOptions join_opts{
      cp::JoinType::INNER,
      /*left_keys=*/{"str"},
      /*right_keys=*/{"str"}, cp::literal(true), "l_", "r_"};

  cp::Declaration hashjoin{
      "hashjoin", {std::move(left), std::move(right)}, std::move(join_opts)};

  return ExecutePlanAndCollectAsTable(std::move(hashjoin));
}

// (Doc section: HashJoin Example)

// (Doc section: KSelect Example)

/// \brief An example showing a select-k node
///
/// Source-KSelect
/// This example shows how the top (or bottom) K elements can be
/// selected from a column. The output node is a modified
/// sink node where output can be obtained as a table.
arrow::Status SourceKSelectExample() {
  ARROW_ASSIGN_OR_RAISE(auto input, MakeGroupableBatches());
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<cp::ExecPlan> plan,
                        cp::ExecPlan::Make(*cp::threaded_exec_context()));
  arrow::AsyncGenerator<std::optional<cp::ExecBatch>> sink_gen;

  ARROW_ASSIGN_OR_RAISE(
      cp::ExecNode * source,
      cp::MakeExecNode("source", plan.get(), {},
                       cp::SourceNodeOptions{input.schema, input.gen()}));

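  // Keep the two rows with the largest values in the "i32" column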
  cp::SelectKOptions options = cp::SelectKOptions::TopKDefault(/*k=*/2, {"i32"});

  ARROW_RETURN_NOT_OK(cp::MakeExecNode("select_k_sink", plan.get(), {source},
                                       cp::SelectKSinkNodeOptions{options, &sink_gen}));

  auto schema = arrow::schema(
      {arrow::field("i32", arrow::int32()), arrow::field("str", arrow::utf8())});

  return ExecutePlanAndCollectAsTableWithCustomSink(plan, schema, sink_gen);
}

// (Doc section: KSelect Example)

// (Doc section: Write Example)

/// \brief An example showing a write node
/// \param file_path The destination to write to
///
/// Scan-Filter-Write
/// This example shows how a scan node can be used to load the data
/// and how, after processing, the result can be written to disk.
arrow::Status ScanFilterWriteExample(const std::string& file_path) {
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::dataset::Dataset> dataset, GetDataset());

  auto options = std::make_shared<arrow::dataset::ScanOptions>();
  // empty projection
  options->projection = cp::project({}, {});

  auto scan_node_options = arrow::dataset::ScanNodeOptions{dataset, options};

  cp::Declaration scan{"scan", std::move(scan_node_options)};

  std::string root_path;
  std::string uri = "file://" + file_path;
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<arrow::fs::FileSystem> filesystem,
                        arrow::fs::FileSystemFromUri(uri, &root_path));

  auto base_path = root_path + "/parquet_dataset";
  // Uncomment the following line, if run repeatedly
  // ARROW_RETURN_NOT_OK(filesystem->DeleteDirContents(base_path));
  ARROW_RETURN_NOT_OK(filesystem->CreateDir(base_path));

  // The partition schema determines which fields are part of the partitioning.
  auto partition_schema = arrow::schema({arrow::field("a", arrow::int32())});
  // We'll use Hive-style partitioning,
  // which creates directories with "key=value" pairs.
  auto partitioning =
      std::make_shared<arrow::dataset::HivePartitioning>(partition_schema);
  // We'll write Parquet files.
  auto format = std::make_shared<arrow::dataset::ParquetFileFormat>();

  arrow::dataset::FileSystemDatasetWriteOptions write_options;
  write_options.file_write_options = format->DefaultWriteOptions();
  write_options.filesystem = filesystem;
  write_options.base_dir = base_path;
  write_options.partitioning = partitioning;
  write_options.basename_template = "part{i}.parquet";
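  // "{i}" in the basename template is replaced with an auto-incremented
  // integer as files are written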

  arrow::dataset::WriteNodeOptions write_node_options{write_options};

  cp::Declaration write{"write", {std::move(scan)}, std::move(write_node_options)};

  // Since the write node has no output we simply run the plan to completion and the
  // data should be written
  ARROW_RETURN_NOT_OK(cp::DeclarationToStatus(std::move(write)));

  std::cout << "Dataset written to " << base_path << std::endl;
  return arrow::Status::OK();
}

// (Doc section: Write Example)

// (Doc section: Union Example)

/// \brief An example showing a union node
///
/// Source-Union-Table
/// This example shows how a union operation can be applied on two
/// data sources. The output is collected into a table.
arrow::Status SourceUnionSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  cp::Declaration lhs{"source",
                      cp::SourceNodeOptions{basic_data.schema, basic_data.gen()}};
  lhs.label = "lhs";
  cp::Declaration rhs{"source",
                      cp::SourceNodeOptions{basic_data.schema, basic_data.gen()}};
  rhs.label = "rhs";
  cp::Declaration union_plan{
      "union", {std::move(lhs), std::move(rhs)}, cp::ExecNodeOptions{}};

  return ExecutePlanAndCollectAsTable(std::move(union_plan));
}

// (Doc section: Union Example)

// (Doc section: Table Sink Example)

/// \brief An example showing a table sink node
///
/// TableSink Example
/// This example shows how a table_sink can be used
/// in an execution plan. This includes a source node
/// receiving data as batches and the table sink node
/// which emits the output as a table.
arrow::Status TableSinkExample() {
  ARROW_ASSIGN_OR_RAISE(std::shared_ptr<cp::ExecPlan> plan,
                        cp::ExecPlan::Make(*cp::threaded_exec_context()));

  ARROW_ASSIGN_OR_RAISE(auto basic_data, MakeBasicBatches());

  auto source_node_options = cp::SourceNodeOptions{basic_data.schema, basic_data.gen()};

  ARROW_ASSIGN_OR_RAISE(cp::ExecNode * source,
                        cp::MakeExecNode("source", plan.get(), {}, source_node_options));

  std::shared_ptr<arrow::Table> output_table;
  auto table_sink_options = cp::TableSinkNodeOptions{&output_table};

  ARROW_RETURN_NOT_OK(
      cp::MakeExecNode("table_sink", plan.get(), {source}, table_sink_options));
  // validate the ExecPlan
  ARROW_RETURN_NOT_OK(plan->Validate());
  std::cout << "ExecPlan created : " << plan->ToString() << std::endl;
  // start the ExecPlan
  ARROW_RETURN_NOT_OK(plan->StartProducing());

  // Wait for the plan to finish
  auto finished = plan->finished();
  ARROW_RETURN_NOT_OK(finished.status());
  std::cout << "Results : " << output_table->ToString() << std::endl;
  return arrow::Status::OK();
}

// (Doc section: Table Sink Example)

// (Doc section: RecordBatchReaderSource Example)

/// \brief An example showing the usage of a RecordBatchReader as the data source.
///
/// RecordBatchReaderSourceSink Example
/// This example shows how a record_batch_reader_source can be used
/// in an execution plan. This includes the source node
/// receiving data from a TableBatchReader.
arrow::Status RecordBatchReaderSourceSinkExample() {
  ARROW_ASSIGN_OR_RAISE(auto table, GetTable());
  std::shared_ptr<arrow::RecordBatchReader> reader =
      std::make_shared<arrow::TableBatchReader>(table);
  cp::Declaration reader_source{"record_batch_reader_source",
                                cp::RecordBatchReaderSourceNodeOptions{reader}};
  return ExecutePlanAndCollectAsTable(std::move(reader_source));
}

// (Doc section: RecordBatchReaderSource Example)

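// Each example is selected at runtime by an integer mode passed on the
// command line; the values map to the entries of this enum.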
enum ExampleMode {
  SOURCE_SINK = 0,
  TABLE_SOURCE_SINK = 1,
  SCAN = 2,
  FILTER = 3,
  PROJECT = 4,
  SCALAR_AGGREGATION = 5,
  GROUP_AGGREGATION = 6,
  CONSUMING_SINK = 7,
  ORDER_BY_SINK = 8,
  HASHJOIN = 9,
  KSELECT = 10,
  WRITE = 11,
  UNION = 12,
  TABLE_SOURCE_TABLE_SINK = 13,
  RECORD_BATCH_READER_SOURCE = 14
};

int main(int argc, char** argv) {
  if (argc < 3) {
    // Fake success for CI purposes.
    return EXIT_SUCCESS;
  }
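  // argv[1] is a base output path (used by the write example) and argv[2]
  // selects which example to run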

  std::string base_save_path = argv[1];
  int mode = std::atoi(argv[2]);
  arrow::Status status;
  // ensure arrow::dataset node factories are in the registry
  arrow::dataset::internal::Initialize();
  switch (mode) {
    case SOURCE_SINK:
      PrintBlock("Source Sink Example");
      status = SourceSinkExample();
      break;
    case TABLE_SOURCE_SINK:
      PrintBlock("Table Source Sink Example");
      status = TableSourceSinkExample();
      break;
    case SCAN:
      PrintBlock("Scan Example");
      status = ScanSinkExample();
      break;
    case FILTER:
      PrintBlock("Filter Example");
      status = ScanFilterSinkExample();
      break;
    case PROJECT:
      PrintBlock("Project Example");
      status = ScanProjectSinkExample();
      break;
    case GROUP_AGGREGATION:
      PrintBlock("Group Aggregate Example");
      status = SourceGroupAggregateSinkExample();
      break;
    case SCALAR_AGGREGATION:
      PrintBlock("Scalar Aggregate Example");
      status = SourceScalarAggregateSinkExample();
      break;
    case CONSUMING_SINK:
      PrintBlock("Consuming-Sink Example");
      status = SourceConsumingSinkExample();
      break;
    case ORDER_BY_SINK:
      PrintBlock("OrderBy Example");
      status = SourceOrderBySinkExample();
      break;
    case HASHJOIN:
      PrintBlock("HashJoin Example");
      status = SourceHashJoinSinkExample();
      break;
    case KSELECT:
      PrintBlock("KSelect Example");
      status = SourceKSelectExample();
      break;
    case WRITE:
      PrintBlock("Write Example");
      status = ScanFilterWriteExample(base_save_path);
      break;
    case UNION:
      PrintBlock("Union Example");
      status = SourceUnionSinkExample();
      break;
    case TABLE_SOURCE_TABLE_SINK:
      PrintBlock("TableSink Example");
      status = TableSinkExample();
      break;
    case RECORD_BATCH_READER_SOURCE:
      PrintBlock("RecordBatchReaderSource Example");
      status = RecordBatchReaderSourceSinkExample();
      break;
    default:
      break;
  }

  if (status.ok()) {
    return EXIT_SUCCESS;
  } else {
    std::cout << "Error occurred: " << status.message() << std::endl;
    return EXIT_FAILURE;
  }
}