In-Memory Data Model

Apache Arrow defines columnar array data structures by composing type metadata with memory buffers, like the ones explained in the documentation on Memory and IO. These data structures are exposed in Python through a series of interrelated classes:

  • Type Metadata: Instances of pyarrow.DataType, which describe a logical array type
  • Schemas: Instances of pyarrow.Schema, which describe a named collection of types. These can be thought of as the column types in a table-like object.
  • Arrays: Instances of pyarrow.Array, which are atomic, contiguous columnar data structures composed from Arrow Buffer objects
  • Record Batches: Instances of pyarrow.RecordBatch, which are a collection of Array objects with a particular Schema
  • Tables: Instances of pyarrow.Table, a logical table data structure in which each column consists of one or more pyarrow.Array objects of the same type.

We will examine these in the sections below in a series of examples.

Type Metadata

Apache Arrow defines language agnostic column-oriented data structures for array data. These include:

  • Fixed-length primitive types: numbers, booleans, dates and times, fixed-size binary, decimals, and other values that fit into a fixed number of bits
  • Variable-length primitive types: binary, string
  • Nested types: list, struct, and union
  • Dictionary type: An encoded categorical type (more on this later)

Each logical data type in Arrow has a corresponding factory function for creating an instance of that type object in Python:

In [1]: import pyarrow as pa

In [2]: t1 = pa.int32()

In [3]: t2 = pa.string()

In [4]: t3 = pa.binary()

In [5]: t4 = pa.binary(10)

In [6]: t5 = pa.timestamp('ms')

In [7]: t1
Out[7]: DataType(int32)

In [8]: print(t1)
int32

In [9]: print(t4)
fixed_size_binary[10]

In [10]: print(t5)
timestamp[ms]

We use the name logical type because the physical storage may be the same for one or more types. For example, int64, float64, and timestamp[ms] all occupy 64 bits per value.
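
A quick way to see this is the bit_width attribute, which fixed-width types expose (assuming a PyArrow version where this attribute is available):

import pyarrow as pa

# These logical types all share the same 64-bit physical layout:
pa.int64().bit_width          # 64
pa.float64().bit_width        # 64
pa.timestamp('ms').bit_width  # 64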

These objects are metadata; they are used for describing the data in arrays, schemas, and record batches. In Python, they can be used in functions where the input data (e.g. Python objects) may be coerced to more than one Arrow type.
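
For example, pyarrow.array (introduced below) accepts an explicit type argument; a minimal sketch of coercing Python integers to a non-default Arrow type:

# Without an explicit type, these values would be inferred as int64;
# passing type metadata coerces them to float64 instead.
arr = pa.array([1, 2, 3], type=pa.float64())
print(arr.type)  # float64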

The Field type is a type plus a name and optional user-defined metadata:

In [11]: f0 = pa.field('int32_field', t1)

In [12]: f0
Out[12]: pyarrow.Field<int32_field: int32>

In [13]: f0.name
Out[13]: 'int32_field'

In [14]: f0.type
Out[14]: DataType(int32)
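
The optional metadata is passed as a dict; a minimal sketch (note that recent PyArrow versions store metadata keys and values as bytes, and 'origin' here is an arbitrary example key, not a predefined one):

f1 = pa.field('int32_field', t1, metadata={'origin': 'example'})
f1.metadata  # {b'origin': b'example'}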

Arrow supports nested value types like list, struct, and union. When creating these, you must pass types or fields to indicate the data types of the type’s children. For example, we can define a list of int32 values with:

In [15]: t6 = pa.list_(t1)

In [16]: t6
Out[16]: ListType(list<item: int32>)

A struct is a collection of named fields:

In [17]: fields = [
   ....:     pa.field('s0', t1),
   ....:     pa.field('s1', t2),
   ....:     pa.field('s2', t4),
   ....:     pa.field('s3', t6)
   ....: ]
   ....: 

In [18]: t7 = pa.struct(fields)

In [19]: print(t7)
struct<s0: int32, s1: string, s2: fixed_size_binary[10], s3: list<item: int32>>

See Data Types API for a full listing of data type functions.

Schemas

The Schema type is similar to the struct array type; it defines the column names and types in a record batch or table data structure. The pyarrow.schema factory function makes new Schema objects in Python:

In [20]: fields = [
   ....:     pa.field('s0', t1),
   ....:     pa.field('s1', t2),
   ....:     pa.field('s2', t4),
   ....:     pa.field('s3', t6)
   ....: ]
   ....: 

In [21]: my_schema = pa.schema(fields)

In [22]: my_schema
Out[22]: 
s0: int32
s1: string
s2: fixed_size_binary[10]
s3: list<item: int32>
  child 0, item: int32

In some applications, you will not create schemas directly, but instead use the ones embedded in IPC messages.
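
Individual fields can be retrieved from a schema by position; a minimal sketch, assuming a PyArrow version where Schema supports indexing and len:

my_schema[0]    # pyarrow.Field<s0: int32>
len(my_schema)  # 4, the number of fields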

Arrays

For each data type, there is an accompanying array data structure for holding memory buffers that define a single contiguous chunk of columnar array data. When you are using PyArrow, this data may come from IPC tools, though it can also be created from various types of Python sequences (lists, NumPy arrays, pandas data).

A simple way to create arrays is with pyarrow.array, which is similar to the numpy.array function:

In [23]: arr = pa.array([1, 2, None, 3])

In [24]: arr
Out[24]: 
<pyarrow.lib.Int64Array object at 0x7f7f67592f98>
[
  1,
  2,
  NA,
  3
]

The array’s type attribute is the corresponding piece of type metadata:

In [25]: arr.type
Out[25]: DataType(int64)

Each in-memory array has a known length and null count (which will be 0 if there are no null values):

In [26]: len(arr)
Out[26]: 4

In [27]: arr.null_count
Out[27]: 1

Scalar values can be selected with normal indexing. pyarrow.array converts None values to Arrow nulls; indexing a null returns the special pyarrow.NA value:

In [28]: arr[0]
Out[28]: 1

In [29]: arr[2]
Out[29]: NA

Arrow data is immutable, so values can be selected but not assigned.

Arrays can be sliced without copying:

In [30]: arr[1:3]
Out[30]: 
<pyarrow.lib.Int64Array object at 0x7f7f67592f60>
[
  2,
  NA
]
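
Equivalently, the slice method takes an offset and an optional length (length defaults to the remainder of the array):

arr.slice(1, 2)  # the same zero-copy two-element view: 2, NA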

pyarrow.array can create simple nested data structures like lists:

In [31]: nested_arr = pa.array([[], None, [1, 2], [None, 1]])

In [32]: print(nested_arr.type)
list<item: int64>
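
The value type can also be given explicitly instead of being inferred; a minimal sketch:

# Force int32 list elements rather than the inferred int64.
nested_arr2 = pa.array([[1, 2], None], type=pa.list_(pa.int32()))
print(nested_arr2.type)  # list<item: int32>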

Dictionary Arrays

The Dictionary type in PyArrow is a special array type that is similar to a factor in R or a pandas.Categorical. It enables one or more record batches in a file or stream to transmit integer indices referencing a shared dictionary containing the distinct values in the logical array. It is most often used with strings, where it saves memory and improves performance.

The way that dictionaries are handled in the Apache Arrow format and the way they appear in C++ and Python is slightly different. We define a special DictionaryArray type with a corresponding dictionary type. Let’s consider an example:

In [33]: indices = pa.array([0, 1, 0, 1, 2, 0, None, 2])

In [34]: dictionary = pa.array(['foo', 'bar', 'baz'])

In [35]: dict_array = pa.DictionaryArray.from_arrays(indices, dictionary)

In [36]: dict_array
Out[36]: 
<pyarrow.lib.DictionaryArray object at 0x7f7f672f8b88>
[
  'foo',
  'bar',
  'foo',
  'bar',
  'baz',
  'foo',
  NA,
  'baz'
]

Here we have:

In [37]: print(dict_array.type)
dictionary<values=string, indices=int64, ordered=0>

In [38]: dict_array.indices
Out[38]: 
<pyarrow.lib.Int64Array object at 0x7f7f672bda48>
[
  0,
  1,
  0,
  1,
  2,
  0,
  NA,
  2
]

In [39]: dict_array.dictionary
Out[39]: 
<pyarrow.lib.StringArray object at 0x7f7f672bd9f8>
[
  'foo',
  'bar',
  'baz'
]

When using DictionaryArray with pandas, the analogue is pandas.Categorical (more on this later):

In [40]: dict_array.to_pandas()
Out[40]: 
[foo, bar, foo, bar, baz, foo, NaN, baz]
Categories (3, object): [foo, bar, baz]
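
Going the other direction, a plain array can be dictionary-encoded with the dictionary_encode method; a sketch (the default index type may differ across versions):

arr = pa.array(['foo', 'bar', 'foo', 'baz'])
encoded = arr.dictionary_encode()  # DictionaryArray over the distinct values
print(encoded.type)  # e.g. dictionary<values=string, indices=int32, ordered=0>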

Record Batches

A Record Batch in Apache Arrow is a collection of equal-length array instances. Let’s consider a collection of arrays:

In [41]: data = [
   ....:     pa.array([1, 2, 3, 4]),
   ....:     pa.array(['foo', 'bar', 'baz', None]),
   ....:     pa.array([True, None, False, True])
   ....: ]
   ....: 

A record batch can be created from this list of arrays using RecordBatch.from_arrays:

In [42]: batch = pa.RecordBatch.from_arrays(data, ['f0', 'f1', 'f2'])

In [43]: batch.num_columns
Out[43]: 3

In [44]: batch.num_rows
Out[44]: 4

In [45]: batch.schema
Out[45]: 
f0: int64
f1: string
f2: bool

In [46]: batch[1]
Out[46]: 
<pyarrow.lib.StringArray object at 0x7f7f672df228>
[
  'foo',
  'bar',
  'baz',
  NA
]

Like an array, a record batch can be sliced without copying memory:

In [47]: batch2 = batch.slice(1, 3)

In [48]: batch2[1]
Out[48]: 
<pyarrow.lib.StringArray object at 0x7f7f672dff48>
[
  'bar',
  'baz',
  NA
]
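
For quick inspection, a record batch can be converted to plain Python objects with to_pydict (column lists keyed by field name, in schema order); a minimal sketch:

batch.to_pydict()
# {'f0': [1, 2, 3, 4],
#  'f1': ['foo', 'bar', 'baz', None],
#  'f2': [True, None, False, True]}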

Tables

The PyArrow Table type is not part of the Apache Arrow specification, but is rather a tool to help with wrangling multiple record batches and array pieces as a single logical dataset. As a relevant example, we may receive multiple small record batches in a socket stream, then need to concatenate them into contiguous memory for use in NumPy or pandas. The Table object makes this efficient without requiring additional memory copying.

Considering the record batch we created above, we can create a Table containing one or more copies of the batch using Table.from_batches:

In [49]: batches = [batch] * 5

In [50]: table = pa.Table.from_batches(batches)

In [51]: table
Out[51]: 
pyarrow.Table
f0: int64
f1: string
f2: bool

In [52]: table.num_rows
Out[52]: 20

The table’s columns are instances of Column, which is a container for one or more arrays of the same type:

In [53]: c = table[0]

In [54]: c
Out[54]: 
<pyarrow.lib.Column object at 0x7f7f6730e090>
chunk 0: <pyarrow.lib.Int64Array object at 0x7f7f6727c7c8>
[
  1,
  2,
  3,
  4
]
chunk 1: <pyarrow.lib.Int64Array object at 0x7f7f6727c818>
[
  1,
  2,
  3,
  4
]
chunk 2: <pyarrow.lib.Int64Array object at 0x7f7f6727c868>
[
  1,
  2,
  3,
  4
]
chunk 3: <pyarrow.lib.Int64Array object at 0x7f7f6727c8b8>
[
  1,
  2,
  3,
  4
]
chunk 4: <pyarrow.lib.Int64Array object at 0x7f7f6727c908>
[
  1,
  2,
  3,
  4
]

In [55]: c.data
Out[55]: <pyarrow.lib.ChunkedArray at 0x7f7f6730e4e0>

In [56]: c.data.num_chunks
Out[56]: 5

In [57]: c.data.chunk(0)
Out[57]: 
<pyarrow.lib.Int64Array object at 0x7f7f6727c9a8>
[
  1,
  2,
  3,
  4
]

As you’ll see in the pandas section, we can convert these objects to contiguous NumPy arrays for use in pandas:

In [58]: c.to_pandas()
Out[58]: 
0     1
1     2
2     3
3     4
4     1
5     2
6     3
7     4
8     1
9     2
10    3
11    4
12    1
13    2
14    3
15    4
16    1
17    2
18    3
19    4
Name: f0, dtype: int64
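
The same conversion works on the table as a whole: Table.to_pandas produces a pandas.DataFrame, concatenating the chunks of each column. A minimal sketch:

df = table.to_pandas()
df.shape  # (20, 3)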

Custom Schema and Field Metadata

TODO