Streaming, Serialization, and IPC

Writing and Reading Streams

Arrow defines two types of binary formats for serializing record batches:

  • Streaming format: for sending an arbitrary length sequence of record batches. The format must be processed from start to end, and does not support random access

  • File or Random Access format: for serializing a fixed number of record batches. Supports random access, and thus is very useful when used with memory maps

To follow this section, make sure to first read the section on Memory and IO.

Using streams

First, let’s create a small record batch:

In [1]: import pyarrow as pa

In [2]: data = [
   ...:     pa.array([1, 2, 3, 4]),
   ...:     pa.array(['foo', 'bar', 'baz', None]),
   ...:     pa.array([True, None, False, True])
   ...: ]
   ...: 

In [3]: batch = pa.record_batch(data, names=['f0', 'f1', 'f2'])

In [4]: batch.num_rows
Out[4]: 4

In [5]: batch.num_columns
Out[5]: 3

Now, we can begin writing a stream containing some number of these batches. For this we use RecordBatchStreamWriter, which can write to a writeable NativeFile object or a writeable Python object. For convenience, it can be created with new_stream():

In [6]: sink = pa.BufferOutputStream()

In [7]: with pa.ipc.new_stream(sink, batch.schema) as writer:
   ...:    for i in range(5):
   ...:       writer.write_batch(batch)
   ...: 

Here we used an in-memory Arrow buffer stream (sink), but this could have been a socket or some other IO sink.
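The writer also accepts any writeable Python file-like object. As a minimal sketch (py_sink is just an illustrative name), the same stream could be written to an io.BytesIO object:

import io

py_sink = io.BytesIO()
with pa.ipc.new_stream(py_sink, batch.schema) as writer:
    for i in range(5):
        writer.write_batch(batch)

stream_bytes = py_sink.getvalue()  # raw bytes of the Arrow stream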

When creating the StreamWriter, we pass the schema, since the schema (column names and types) must be the same for all of the batches sent in this particular stream. Now we can do:

In [8]: buf = sink.getvalue()

In [9]: buf.size
Out[9]: 1984

Now buf contains the complete stream as an in-memory byte buffer. We can read such a stream with RecordBatchStreamReader or the convenience function pyarrow.ipc.open_stream:

In [10]: with pa.ipc.open_stream(buf) as reader:
   ....:       schema = reader.schema
   ....:       batches = [b for b in reader]
   ....: 

In [11]: schema
Out[11]: 
f0: int64
f1: string
f2: bool

In [12]: len(batches)
Out[12]: 5

We can check the returned batches are the same as the original input:

In [13]: batches[0].equals(batch)
Out[13]: True

An important point is that if the input source supports zero-copy reads (e.g. a memory map or pyarrow.BufferReader), then the returned batches are also zero-copy and do not allocate any new memory on read.
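As a rough way to check this, we can measure Arrow's allocations while reading from a pyarrow.BufferReader wrapping the stream bytes (a sketch; exact numbers depend on the build):

source = pa.BufferReader(buf)            # zero-copy readable source
allocated_before = pa.total_allocated_bytes()
with pa.ipc.open_stream(source) as reader:
    batches = list(reader)               # batches reference memory inside buf
print(pa.total_allocated_bytes() - allocated_before)  # expected to be (near) zero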

Writing and Reading Random Access Files

The RecordBatchFileWriter has the same API as RecordBatchStreamWriter. You can create one with new_file():

In [14]: sink = pa.BufferOutputStream()

In [15]: with pa.ipc.new_file(sink, batch.schema) as writer:
   ....:    for i in range(10):
   ....:       writer.write_batch(batch)
   ....: 

In [16]: buf = sink.getvalue()

In [17]: buf.size
Out[17]: 4226

The difference between RecordBatchFileReader and RecordBatchStreamReader is that the input source must have a seek method for random access. The stream reader only requires read operations. We can also use the open_file() convenience function to open the file:

In [18]: with pa.ipc.open_file(buf) as reader:
   ....:    num_record_batches = reader.num_record_batches
   ....: 

In [19]: b = reader.get_batch(3)

Because we have access to the entire payload, we know the number of record batches in the file, and can read any at random.

In [20]: num_record_batches
Out[20]: 10

In [21]: b.equals(batch)
Out[21]: True
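Random access also means we can fetch only the batches we need without touching the rest. For instance, a short sketch that reads just the final batch:

with pa.ipc.open_file(buf) as reader:
    last = reader.get_batch(reader.num_record_batches - 1)

last.equals(batch)  # True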

Reading from Stream and File Format for pandas

The stream and file reader classes have a special read_pandas method to simplify reading multiple record batches and converting them to a single DataFrame output:

In [22]: with pa.ipc.open_file(buf) as reader:
   ....:    df = reader.read_pandas()
   ....: 

In [23]: df[:5]
Out[23]: 
   f0    f1     f2
0   1   foo   True
1   2   bar   None
2   3   baz  False
3   4  None   True
4   1   foo   True
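read_pandas is a convenience; it is roughly equivalent to reading all batches into a single Table first and then converting that to pandas:

with pa.ipc.open_file(buf) as reader:
    table = reader.read_all()   # all record batches concatenated into one Table

df = table.to_pandas()          # same DataFrame as read_pandas() above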

Efficiently Writing and Reading Arrow Data

Being optimized for zero-copy and memory-mapped data, Arrow makes it easy to read and write arrays while consuming a minimal amount of resident memory.

When writing and reading raw Arrow data, we can use the Arrow File Format or the Arrow Streaming Format.

To dump an array to a file, you can use new_file(), which will provide a new RecordBatchFileWriter instance that can be used to write batches of data to that file.

For example, to write an array of 10M integers, we could write it in 1000 chunks of 10000 entries each:

In [24]: BATCH_SIZE = 10000

In [25]: NUM_BATCHES = 1000

In [26]: schema = pa.schema([pa.field('nums', pa.int32())])

In [27]: with pa.OSFile('bigfile.arrow', 'wb') as sink:
   ....:    with pa.ipc.new_file(sink, schema) as writer:
   ....:       for row in range(NUM_BATCHES):
   ....:             batch = pa.record_batch([pa.array(range(BATCH_SIZE), type=pa.int32())], schema)
   ....:             writer.write(batch)
   ....: 

Record batches support multiple columns, so in practice we always write the equivalent of a Table.
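The writers can also write whole tables at once. As a sketch (the file name is just an example), a two-column Table can be written in one call with write_table():

table = pa.table({'nums': pa.array([1, 2, 3], type=pa.int32()),
                  'names': pa.array(['a', 'b', 'c'])})

with pa.OSFile('smallfile.arrow', 'wb') as sink:
    with pa.ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)   # written internally as record batches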

Writing in batches is effective because in theory we only need to keep the current batch in memory while writing. When reading back, we can be even more efficient by mapping the data directly from disk and avoiding any new memory allocation on read.

Under normal conditions, reading back our file will allocate memory for the whole dataset (a few tens of megabytes here):

In [28]: with pa.OSFile('bigfile.arrow', 'rb') as source:
   ....:    loaded_array = pa.ipc.open_file(source).read_all()
   ....: 

In [29]: print("LEN:", len(loaded_array))
LEN: 10000000

In [30]: print("RSS: {}MB".format(pa.total_allocated_bytes() >> 20))
RSS: 38MB

To read big data from disk more efficiently, we can memory-map the file, so that Arrow can directly reference the data mapped from disk and avoid having to allocate its own memory. In that case the operating system will be able to page in the mapped memory lazily and page it out without any write-back cost when under pressure, making it possible to read arrays bigger than the total memory.

In [31]: with pa.memory_map('bigfile.arrow', 'rb') as source:
   ....:    loaded_array = pa.ipc.open_file(source).read_all()
   ....: 

In [32]: print("LEN:", len(loaded_array))
LEN: 10000000

In [33]: print("RSS: {}MB".format(pa.total_allocated_bytes() >> 20))
RSS: 0MB

Note

Other high level APIs like read_table() also provide a memory_map option. But in those cases, the memory mapping can’t help with reducing resident memory consumption. See Reading Parquet and Memory Mapping for details.

Arbitrary Object Serialization

Warning

The custom serialization functionality is deprecated in pyarrow 2.0, and will be removed in a future version.

While the serialization functions in this section utilize the Arrow stream protocol internally, they do not produce data that is compatible with the above ipc.open_file and ipc.open_stream functions.

For arbitrary objects, you can use the standard library pickle functionality instead. For pyarrow objects, you can use the IPC serialization format through the pyarrow.ipc module, as explained above.

PyArrow serialization was originally meant to provide a higher-performance alternative to pickle thanks to zero-copy semantics. However, pickle protocol 5 gained support for zero-copy using out-of-band buffers, and can be used instead for similar benefits.
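For reference, here is a minimal sketch of that pickle-based approach (protocol 5 requires Python 3.8+; out-of-band buffers avoid copying the array data into the pickle payload):

import pickle
import numpy as np

data = {i: np.random.randn(500, 500) for i in range(100)}

buffers = []
payload = pickle.dumps(data, protocol=5, buffer_callback=buffers.append)
restored = pickle.loads(payload, buffers=buffers)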

In pyarrow we are able to serialize and deserialize many kinds of Python objects. As an example, consider a dictionary containing NumPy arrays:

In [34]: import numpy as np

In [35]: data = {
   ....:     i: np.random.randn(500, 500)
   ....:     for i in range(100)
   ....: }
   ....: 

We use the pyarrow.serialize function to convert this data to a byte buffer:

In [36]: buf = pa.serialize(data).to_buffer()

In [37]: type(buf)
Out[37]: pyarrow.lib.Buffer

In [38]: buf.size
Out[38]: 200028928

pyarrow.serialize creates an intermediate object which can be converted to a buffer (the to_buffer method) or written directly to an output stream.
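For example, a minimal sketch of writing directly to an output stream (this assumes the write_to method of the intermediate SerializedPyObject; since the API is deprecated, check your pyarrow version):

out = pa.BufferOutputStream()
pa.serialize(data).write_to(out)   # skip the intermediate to_buffer() step
stream_buf = out.getvalue()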

pyarrow.deserialize converts a buffer-like object back to the original Python object:

In [39]: restored_data = pa.deserialize(buf)

In [40]: restored_data[0]
Out[40]: 
array([[-2.20592621e-01,  1.03680067e+00, -2.27152781e+00, ...,
         1.00960827e-01,  3.23892490e-01, -8.39907075e-01],
       [ 7.53230051e-01, -8.46062298e-01, -5.66552007e-04, ...,
         1.89797413e+00, -9.88647864e-01, -6.07019565e-01],
       [-2.43765786e+00,  2.59369213e-01, -1.24398019e-01, ...,
         2.13087530e-02,  4.32757232e-01,  1.15811242e+00],
       ...,
       [-1.34048360e-01,  2.20731545e-01,  5.26418333e-01, ...,
        -1.73155027e-01, -2.36988013e+00, -5.95570110e-01],
       [-4.56511573e-06,  9.07213121e-01, -9.54651251e-01, ...,
        -8.02932757e-01,  9.61821681e-01,  6.08325718e-02],
       [ 5.94400752e-01,  8.15718025e-01, -2.08306166e+00, ...,
        -2.53815952e+00,  5.46157989e-01, -1.76917692e+00]])

Serializing Custom Data Types

If an unrecognized data type is encountered when serializing an object, pyarrow will fall back on using pickle for converting that type to a byte string. There may be a more efficient way, though.

Consider a class with two members, one of which is a NumPy array:

class MyData:
    def __init__(self, name, data):
        self.name = name
        self.data = data

We write functions to convert this to and from a dictionary with simpler types:

def _serialize_MyData(val):
    return {'name': val.name, 'data': val.data}

def _deserialize_MyData(data):
    return MyData(data['name'], data['data'])

Then we must register these functions in a SerializationContext so that MyData can be recognized:

context = pa.SerializationContext()
context.register_type(MyData, 'MyData',
                      custom_serializer=_serialize_MyData,
                      custom_deserializer=_deserialize_MyData)

Lastly, we use this context as an additional argument to pyarrow.serialize:

buf = pa.serialize(val, context=context).to_buffer()
restored_val = pa.deserialize(buf, context=context)

The SerializationContext also has convenience methods serialize and deserialize, so these are equivalent statements:

buf = context.serialize(val).to_buffer()
restored_val = context.deserialize(buf)

Component-based Serialization

For serializing Python objects containing some number of NumPy arrays, Arrow buffers, or other data types, it may be desirable to transport their serialized representation without having to produce an intermediate copy using the to_buffer method. To motivate this, suppose we have a list of NumPy arrays:

In [41]: import numpy as np

In [42]: data = [np.random.randn(10, 10) for i in range(5)]

The call pa.serialize(data) does not copy the memory inside each of these NumPy arrays. This serialized representation can then be decomposed into a dictionary containing a sequence of pyarrow.Buffer objects, which hold the metadata for each array along with references to the memory inside the arrays. To do this, use the to_components method:

In [43]: serialized = pa.serialize(data)

In [44]: components = serialized.to_components()

The particular details of the output of to_components are not too important. The objects in the 'data' field are pyarrow.Buffer objects, which are zero-copy convertible to Python memoryview objects:

In [45]: memoryview(components['data'][0])
Out[45]: <memory at 0x7f46b7e1f100>

A memoryview can be converted back to an Arrow Buffer with pyarrow.py_buffer:

In [46]: mv = memoryview(components['data'][0])

In [47]: buf = pa.py_buffer(mv)

An object can be reconstructed from its component-based representation using deserialize_components:

In [48]: restored_data = pa.deserialize_components(components)

In [49]: restored_data[0]
Out[49]: 
array([[-0.16226869,  1.5213758 , -0.82508291, -1.14874609,  1.11742172,
         0.04551647,  0.10387406, -1.21818256,  0.08911279, -1.34045615],
       [-0.98971335, -0.95800327, -0.02554351,  0.64100608, -0.93706562,
        -0.53938741,  0.60744363,  0.60278248, -0.34939883, -0.48616462],
       [-0.19091044,  0.18305967, -0.12451155, -0.56222483, -0.45387655,
        -2.069554  , -0.48630579, -1.01665388, -2.36808433, -1.46992813],
       [ 1.25531376, -1.94474851,  0.8083467 , -0.9144361 ,  1.1196704 ,
         0.51084276,  0.36731195,  0.17072472,  0.16927134, -0.38704155],
       [ 0.56401528,  0.12883444,  0.93740081, -0.8049368 , -0.93203886,
         2.95927436, -0.93972088,  0.3551365 , -0.58825862, -1.01422099],
       [ 0.76459761, -0.41049106,  0.07449664, -0.00490244, -0.25335103,
         0.57955406, -0.10193565,  0.85299018,  0.24052013,  0.03473236],
       [ 1.2413463 ,  1.78200954,  0.02071178, -1.34973243,  2.49825681,
         1.03596474, -0.14701814, -0.70973238,  0.21338779, -0.82767671],
       [ 1.76761441, -1.47729421,  0.45893154, -1.45970643,  0.34941249,
        -0.30082802,  0.25731947, -0.1535745 , -1.18072224,  0.61201969],
       [ 0.20191651,  0.45826116,  0.56749678,  0.2103395 ,  1.09382844,
         0.2414023 ,  0.08918079, -0.71285783,  1.21738501,  0.51077583],
       [-0.38869718,  0.38241067, -0.6479257 ,  1.65402299, -1.03161676,
         0.43968159, -0.67721633, -0.26433358,  0.32492745, -1.38434207]])

deserialize_components is also available as a method on SerializationContext objects.
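A short equivalence sketch, assuming the custom types are registered on context as above:

components = context.serialize(val).to_components()
restored_val = context.deserialize_components(components)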