Data Types and In-Memory Data Model¶
Apache Arrow defines columnar array data structures by composing type metadata with memory buffers, like the ones explained in the documentation on Memory and IO. These data structures are exposed in Python through a series of interrelated classes:
Type Metadata: Instances of pyarrow.DataType, which describe a logical array type
Schemas: Instances of pyarrow.Schema, which describe a named collection of types. These can be thought of as the column types in a table-like object.
Arrays: Instances of pyarrow.Array, which are atomic, contiguous columnar data structures composed from Arrow Buffer objects
Record Batches: Instances of pyarrow.RecordBatch, which are a collection of Array objects with a particular Schema
Tables: Instances of pyarrow.Table, a logical table data structure in which each column consists of one or more pyarrow.Array objects of the same type.
We will examine these in the sections below in a series of examples.
Type Metadata¶
Apache Arrow defines language agnostic column-oriented data structures for array data. These include:
Fixed-length primitive types: numbers, booleans, date and times, fixed size binary, decimals, and other values that fit into a given number of bits
Variable-length primitive types: binary, string
Nested types: list, struct, and union
Dictionary type: An encoded categorical type (more on this later)
Each logical data type in Arrow has a corresponding factory function for creating an instance of that type object in Python:
In [1]: import pyarrow as pa
In [2]: t1 = pa.int32()
In [3]: t2 = pa.string()
In [4]: t3 = pa.binary()
In [5]: t4 = pa.binary(10)
In [6]: t5 = pa.timestamp('ms')
In [7]: t1
Out[7]: DataType(int32)
In [8]: print(t1)
int32
In [9]: print(t4)
fixed_size_binary[10]
In [10]: print(t5)
timestamp[ms]
We use the name logical type because the physical storage may be the same for one or more types. For example, int64, float64, and timestamp[ms] all occupy 64 bits per value.
These objects are metadata; they are used for describing the data in arrays, schemas, and record batches. In Python, they can be used in functions where the input data (e.g. Python objects) may be coerced to more than one Arrow type.
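For instance, the same Python integers can be stored under more than one Arrow type by passing an explicit type to pyarrow.array (covered in more detail below); a brief sketch:
# The same Python values coerced to different Arrow types
pa.array([1, 2, 3], type=pa.int8())     # 8-bit integer storage
pa.array([1, 2, 3], type=pa.float64())  # 64-bit floating point storage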
The Field type is a type plus a name and optional user-defined metadata:
In [11]: f0 = pa.field('int32_field', t1)
In [12]: f0
Out[12]: pyarrow.Field<int32_field: int32>
In [13]: f0.name
Out[13]: 'int32_field'
In [14]: f0.type
Out[14]: DataType(int32)
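A field can also carry a nullable flag and custom metadata; here is a brief sketch (the metadata key used below is purely illustrative):
# Fields may declare nullability and attach user-defined metadata
f1 = pa.field('int32_field', t1, nullable=False, metadata={'origin': 'example'})
f1.nullable   # False
f1.metadata   # {b'origin': b'example'} -- metadata keys and values are stored as bytes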
Arrow supports nested value types like list, struct, and union. When creating these, you must pass types or fields to indicate the data types of the children. For example, we can define a list of int32 values with:
In [15]: t6 = pa.list_(t1)
In [16]: t6
Out[16]: ListType(list<item: int32>)
A struct is a collection of named fields:
In [17]: fields = [
....: pa.field('s0', t1),
....: pa.field('s1', t2),
....: pa.field('s2', t4),
....: pa.field('s3', t6),
....: ]
....:
In [18]: t7 = pa.struct(fields)
In [19]: print(t7)
struct<s0: int32, s1: string, s2: fixed_size_binary[10], s3: list<item: int32>>
For convenience, you can pass (name, type) tuples directly instead of Field instances:
In [20]: t8 = pa.struct([('s0', t1), ('s1', t2), ('s2', t4), ('s3', t6)])
In [21]: print(t8)
struct<s0: int32, s1: string, s2: fixed_size_binary[10], s3: list<item: int32>>
In [22]: t8 == t7
Out[22]: True
See Data Types API for a full listing of data type functions.
Schemas¶
The Schema type is similar to the struct array type; it defines the column names and types in a record batch or table data structure. The pyarrow.schema() factory function makes new Schema objects in Python:
In [23]: my_schema = pa.schema([('field0', t1),
....: ('field1', t2),
....: ('field2', t4),
....: ('field3', t6)])
....:
In [24]: my_schema
Out[24]:
field0: int32
field1: string
field2: fixed_size_binary[10]
field3: list<item: int32>
child 0, item: int32
In some applications, you may not create schemas directly, but only use the ones that are embedded in IPC messages.
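A schema can also be inspected field by field; a brief sketch using standard Schema accessors:
# Inspecting an existing schema
my_schema.names            # ['field0', 'field1', 'field2', 'field3']
my_schema.field('field0')  # pyarrow.Field<field0: int32>
my_schema.types            # list of the corresponding DataType objects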
Arrays¶
For each data type, there is an accompanying array data structure for holding memory buffers that define a single contiguous chunk of columnar array data. When you are using PyArrow, this data may come from IPC tools, though it can also be created from various types of Python sequences (lists, NumPy arrays, pandas data).
A simple way to create arrays is with pyarrow.array, which is similar to the numpy.array function. By default PyArrow will infer the data type for you:
In [25]: arr = pa.array([1, 2, None, 3])
In [26]: arr
Out[26]:
<pyarrow.lib.Int64Array object at 0x7f0d895ba220>
[
1,
2,
null,
3
]
But you may also pass a specific data type to override type inference:
In [27]: pa.array([1, 2], type=pa.uint16())
Out[27]:
<pyarrow.lib.UInt16Array object at 0x7f0d895ba9a0>
[
1,
2
]
The array's type attribute is the corresponding piece of type metadata:
In [28]: arr.type
Out[28]: DataType(int64)
Each in-memory array has a known length and null count (which will be 0 if there are no null values):
In [29]: len(arr)
Out[29]: 4
In [30]: arr.null_count
Out[30]: 1
Scalar values can be selected with normal indexing. pyarrow.array converts None values to Arrow nulls; indexing a null element returns a scalar whose Python value is None:
In [31]: arr[0]
Out[31]: <pyarrow.Int64Scalar: 1>
In [32]: arr[2]
Out[32]: <pyarrow.Int64Scalar: None>
Arrow data is immutable, so values can be selected but not assigned.
Arrays can be sliced without copying:
In [33]: arr[1:3]
Out[33]:
<pyarrow.lib.Int64Array object at 0x7f0d895baee0>
[
2,
null
]
None values and NaN handling¶
As mentioned in the section above, the Python object None is always converted to an Arrow null element when converting to a pyarrow.Array. The float NaN value, which is represented by either the Python object float('nan') or numpy.nan, is normally converted to a valid float value during the conversion. If an integer input containing np.nan is supplied to pyarrow.array, a ValueError is raised.
For better compatibility with pandas, NaN values can instead be interpreted as null elements. This is enabled automatically in all from_pandas functions and can be enabled on the other conversion functions by passing from_pandas=True as a function parameter.
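For example, a brief sketch (np denotes an imported numpy):
import numpy as np
pa.array([1.0, None, np.nan])                    # NaN is kept as a valid float value
pa.array([1.0, None, np.nan], from_pandas=True)  # NaN becomes a null element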
List arrays¶
pyarrow.array is able to infer the type of simple nested data structures like lists:
In [34]: nested_arr = pa.array([[], None, [1, 2], [None, 1]])
In [35]: print(nested_arr.type)
list<item: int64>
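As with flat arrays, an explicit type can be passed to override inference for nested data; a brief sketch:
# Requesting a specific element type for the nested list
pa.array([[1, 2], None, [3]], type=pa.list_(pa.int16()))  # list<item: int16>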
Struct arrays¶
For other kinds of nested arrays, such as struct arrays, you currently need to pass the type explicitly. Struct arrays can be initialized from a sequence of Python dicts or tuples:
In [36]: ty = pa.struct([('x', pa.int8()),
....: ('y', pa.bool_())])
....:
In [37]: pa.array([{'x': 1, 'y': True}, {'x': 2, 'y': False}], type=ty)
Out[37]:
<pyarrow.lib.StructArray object at 0x7f0d8987d640>
-- is_valid: all not null
-- child 0 type: int8
[
1,
2
]
-- child 1 type: bool
[
true,
false
]
In [38]: pa.array([(3, True), (4, False)], type=ty)
Out[38]:
<pyarrow.lib.StructArray object at 0x7f0d8987d760>
-- is_valid: all not null
-- child 0 type: int8
[
3,
4
]
-- child 1 type: bool
[
true,
false
]
When initializing a struct array, nulls are allowed both at the struct level and at the individual field level. If initializing from a sequence of Python dicts, a missing dict key is handled as a null value:
In [39]: pa.array([{'x': 1}, None, {'y': None}], type=ty)
Out[39]:
<pyarrow.lib.StructArray object at 0x7f0d8987d4c0>
-- is_valid:
[
true,
false,
true
]
-- child 0 type: int8
[
1,
0,
null
]
-- child 1 type: bool
[
null,
false,
null
]
You can also construct a struct array from existing arrays for each of the struct’s components. In this case, data storage will be shared with the individual arrays, and no copy is involved:
In [40]: xs = pa.array([5, 6, 7], type=pa.int16())
In [41]: ys = pa.array([False, True, True])
In [42]: arr = pa.StructArray.from_arrays((xs, ys), names=('x', 'y'))
In [43]: arr.type
Out[43]: StructType(struct<x: int16, y: bool>)
In [44]: arr
Out[44]:
<pyarrow.lib.StructArray object at 0x7f0d8987d940>
-- is_valid: all not null
-- child 0 type: int16
[
5,
6,
7
]
-- child 1 type: bool
[
false,
true,
true
]
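The child arrays of a struct array can be retrieved again without copying; a brief sketch using StructArray.field():
# Accessing the individual child arrays of a struct array
arr.field('x')  # the int16 child array [5, 6, 7]
arr.field('y')  # the bool child array [false, true, true]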
Union arrays¶
The union type represents a nested array type where each value can be one (and only one) of a set of possible types. There are two possible storage types for union arrays: sparse and dense.
In a sparse union array, each of the child arrays has the same length as the resulting union array. They are adjoined with an int8 "types" array that tells, for each value, from which child array it must be selected:
In [45]: xs = pa.array([5, 6, 7])
In [46]: ys = pa.array([False, False, True])
In [47]: types = pa.array([0, 1, 1], type=pa.int8())
In [48]: union_arr = pa.UnionArray.from_sparse(types, [xs, ys])
In [49]: union_arr.type
Out[49]: SparseUnionType(sparse_union<0: int64=0, 1: bool=1>)
In [50]: union_arr
Out[50]:
<pyarrow.lib.UnionArray object at 0x7f0d8987d9a0>
-- is_valid: all not null
-- type_ids: [
0,
1,
1
]
-- child 0 type: int64
[
5,
6,
7
]
-- child 1 type: bool
[
false,
false,
true
]
In a dense union array, you also pass, in addition to the int8 "types" array, an int32 "offsets" array that tells, for each value, at which offset in the selected child array it can be found:
In [51]: xs = pa.array([5, 6, 7])
In [52]: ys = pa.array([False, True])
In [53]: types = pa.array([0, 1, 1, 0, 0], type=pa.int8())
In [54]: offsets = pa.array([0, 0, 1, 1, 2], type=pa.int32())
In [55]: union_arr = pa.UnionArray.from_dense(types, offsets, [xs, ys])
In [56]: union_arr.type
Out[56]: DenseUnionType(dense_union<0: int64=0, 1: bool=1>)
In [57]: union_arr
Out[57]:
<pyarrow.lib.UnionArray object at 0x7f0d89639340>
-- is_valid: all not null
-- type_ids: [
0,
1,
1,
0,
0
]
-- value_offsets: [
0,
0,
1,
1,
2
]
-- child 0 type: int64
[
5,
6,
7
]
-- child 1 type: bool
[
false,
true
]
Dictionary Arrays¶
The Dictionary type in PyArrow is a special array type that is similar to a factor in R or a pandas.Categorical. It enables one or more record batches in a file or stream to transmit integer indices referencing a shared dictionary containing the distinct values in the logical array. This is often used, particularly with strings, to save memory and improve performance.
The way that dictionaries are handled in the Apache Arrow format and the way they appear in C++ and Python is slightly different. We define a special DictionaryArray type with a corresponding dictionary type. Let's consider an example:
In [58]: indices = pa.array([0, 1, 0, 1, 2, 0, None, 2])
In [59]: dictionary = pa.array(['foo', 'bar', 'baz'])
In [60]: dict_array = pa.DictionaryArray.from_arrays(indices, dictionary)
In [61]: dict_array
Out[61]:
<pyarrow.lib.DictionaryArray object at 0x7f0d89566890>
-- dictionary:
[
"foo",
"bar",
"baz"
]
-- indices:
[
0,
1,
0,
1,
2,
0,
null,
2
]
Here we have:
In [62]: print(dict_array.type)
dictionary<values=string, indices=int64, ordered=0>
In [63]: dict_array.indices
Out[63]:
<pyarrow.lib.Int64Array object at 0x7f0d896398e0>
[
0,
1,
0,
1,
2,
0,
null,
2
]
In [64]: dict_array.dictionary
Out[64]:
<pyarrow.lib.StringArray object at 0x7f0d89639a00>
[
"foo",
"bar",
"baz"
]
When using DictionaryArray with pandas, the analogue is pandas.Categorical (more on this later):
In [65]: dict_array.to_pandas()
Out[65]:
0 foo
1 bar
2 foo
3 bar
4 baz
5 foo
6 NaN
7 baz
dtype: category
Categories (3, object): ['foo', 'bar', 'baz']
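An existing array with repeated values can also be dictionary-encoded directly; a brief sketch using Array.dictionary_encode():
# Dictionary-encoding a plain string array
strings = pa.array(['foo', 'bar', 'foo', 'baz'])
encoded = strings.dictionary_encode()  # a DictionaryArray with integer indices
print(encoded.type)                    # dictionary<values=string, indices=int32, ordered=0>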
Record Batches¶
A Record Batch in Apache Arrow is a collection of equal-length array instances. Let’s consider a collection of arrays:
In [66]: data = [
....: pa.array([1, 2, 3, 4]),
....: pa.array(['foo', 'bar', 'baz', None]),
....: pa.array([True, None, False, True])
....: ]
....:
A record batch can be created from this list of arrays using RecordBatch.from_arrays:
In [67]: batch = pa.RecordBatch.from_arrays(data, ['f0', 'f1', 'f2'])
In [68]: batch.num_columns
Out[68]: 3
In [69]: batch.num_rows
Out[69]: 4
In [70]: batch.schema
Out[70]:
f0: int64
f1: string
f2: bool
In [71]: batch[1]
Out[71]:
<pyarrow.lib.StringArray object at 0x7f0d89639dc0>
[
"foo",
"bar",
"baz",
null
]
Like an array, a record batch can be sliced without copying memory:
In [72]: batch2 = batch.slice(1, 3)
In [73]: batch2[1]
Out[73]:
<pyarrow.lib.StringArray object at 0x7f0d8957e040>
[
"bar",
"baz",
null
]
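A record batch can also be converted back to plain Python data structures; a brief sketch using RecordBatch.to_pydict():
# Converting a record batch to a dict of Python lists
batch.to_pydict()
# {'f0': [1, 2, 3, 4], 'f1': ['foo', 'bar', 'baz', None], 'f2': [True, None, False, True]}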
Tables¶
The PyArrow Table type is not part of the Apache Arrow specification, but is rather a tool to help with wrangling multiple record batches and array pieces as a single logical dataset. As a relevant example, we may receive multiple small record batches in a socket stream, then need to concatenate them into contiguous memory for use in NumPy or pandas. The Table object makes this efficient without requiring additional memory copying.
Considering the record batch we created above, we can create a Table containing one or more copies of the batch using Table.from_batches:
In [74]: batches = [batch] * 5
In [75]: table = pa.Table.from_batches(batches)
In [76]: table
Out[76]:
pyarrow.Table
f0: int64
f1: string
f2: bool
----
f0: [[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4],[1,2,3,4]]
f1: [["foo","bar","baz",null],["foo","bar","baz",null],["foo","bar","baz",null],["foo","bar","baz",null],["foo","bar","baz",null]]
f2: [[true,null,false,true],[true,null,false,true],[true,null,false,true],[true,null,false,true],[true,null,false,true]]
In [77]: table.num_rows
Out[77]: 20
The table's columns are instances of ChunkedArray, which is a container for one or more arrays of the same type.
In [78]: c = table[0]
In [79]: c
Out[79]:
<pyarrow.lib.ChunkedArray object at 0x7f0d895054f0>
[
[
1,
2,
3,
4
],
[
1,
2,
3,
4
],
[
1,
2,
3,
4
],
[
1,
2,
3,
4
],
[
1,
2,
3,
4
]
]
In [80]: c.num_chunks
Out[80]: 5
In [81]: c.chunk(0)
Out[81]:
<pyarrow.lib.Int64Array object at 0x7f0d8957e5e0>
[
1,
2,
3,
4
]
As you’ll see in the pandas section, we can convert these objects to contiguous NumPy arrays for use in pandas:
In [82]: c.to_pandas()
Out[82]:
0 1
1 2
2 3
3 4
4 1
5 2
6 3
7 4
8 1
9 2
10 3
11 4
12 1
13 2
14 3
15 4
16 1
17 2
18 3
19 4
Name: f0, dtype: int64
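The chunks of a ChunkedArray can also be combined into one contiguous array; a brief sketch using combine_chunks() (note that combining copies the data into a single contiguous buffer):
# Flattening the chunked column into a single contiguous array
combined = c.combine_chunks()
len(combined)  # 20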
Multiple tables can also be concatenated together to form a single table using pyarrow.concat_tables, if the schemas are equal:
In [83]: tables = [table] * 2
In [84]: table_all = pa.concat_tables(tables)
In [85]: table_all.num_rows
Out[85]: 40
In [86]: c = table_all[0]
In [87]: c.num_chunks
Out[87]: 10
This is similar to Table.from_batches, but uses tables as input instead of record batches. Record batches can be made into tables, but not the other way around, so if your data is already in table form, then use pyarrow.concat_tables.
Custom Schema and Field Metadata¶
TODO
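As a rough sketch of what this section will cover, custom key/value metadata can be attached to schemas and fields at construction time, or added afterwards with with_metadata (the metadata keys below are purely illustrative):
# Attaching custom metadata to a field and a schema
field = pa.field('id', pa.int64(), metadata={'description': 'row identifier'})
schema = pa.schema([field], metadata={'source': 'example'})
schema.metadata                                        # {b'source': b'example'} -- stored as bytes
schema = schema.with_metadata({'source': 'updated'})   # returns a new Schema; Arrow data is immutable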