Streaming, Serialization, and IPC

Writing and Reading Streams

Arrow defines two types of binary formats for serializing record batches:

  • Streaming format: for sending an arbitrary-length sequence of record batches. The format must be processed from start to end, and does not support random access
  • File or Random Access format: for serializing a fixed number of record batches. Supports random access, and thus is very useful when used with memory maps

To follow this section, make sure to first read the section on Memory and IO.

Using streams

First, let’s create a small record batch:

In [1]: import pyarrow as pa

In [2]: data = [
   ...:     pa.array([1, 2, 3, 4]),
   ...:     pa.array(['foo', 'bar', 'baz', None]),
   ...:     pa.array([True, None, False, True])
   ...: ]
   ...: 

In [3]: batch = pa.RecordBatch.from_arrays(data, ['f0', 'f1', 'f2'])

In [4]: batch.num_rows
Out[4]: 4

In [5]: batch.num_columns
Out[5]: 3
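If the data already lives in a pandas DataFrame, a record batch can also be built from it directly. A minimal sketch, assuming pandas is available; preserve_index=False skips the pandas index so only the three columns are kept:

import pandas as pd

# Build the same three columns as a pandas DataFrame
pandas_df = pd.DataFrame({
    'f0': [1, 2, 3, 4],
    'f1': ['foo', 'bar', 'baz', None],
    'f2': [True, None, False, True],
})

# Convert the DataFrame to a RecordBatch; column types are inferred
batch_from_df = pa.RecordBatch.from_pandas(pandas_df, preserve_index=False)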

Now, we can begin writing a stream containing some number of these batches. For this we use RecordBatchStreamWriter, which can write to a writeable NativeFile object or a writeable Python object:

In [6]: sink = pa.BufferOutputStream()

In [7]: writer = pa.RecordBatchStreamWriter(sink, batch.schema)

Here we used an in-memory Arrow buffer stream, but this could have been a socket or some other IO sink.
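For example, the same writer could target a file on disk. A sketch, where the path is made up and pa.OSFile supplies the writable NativeFile:

# Open a writable NativeFile on disk (hypothetical path) and attach a writer
file_sink = pa.OSFile('/tmp/example_stream.arrow', 'wb')
file_writer = pa.RecordBatchStreamWriter(file_sink, batch.schema)
# ... write batches exactly as below, then close the writer and file_sink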

When creating the StreamWriter, we pass the schema, since the schema (column names and types) must be the same for all of the batches sent in this particular stream. Now we can do:

In [8]: for i in range(5):
   ...:    writer.write_batch(batch)
   ...: 

In [9]: writer.close()

In [10]: buf = sink.get_result()

In [11]: buf.size
Out[11]: 2388

Now buf contains the complete stream as an in-memory byte buffer. We can read such a stream with RecordBatchStreamReader or the convenience function pyarrow.open_stream:

In [12]: reader = pa.open_stream(buf)

In [13]: reader.schema
Out[13]: 
f0: int64
f1: string
f2: bool
metadata
--------
{}
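Since the stream carries the schema, we can also verify that it matches the schema the writer was created with; a small sanity check:

# The schema read back from the stream should match the writer's schema
assert reader.schema.equals(batch.schema)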

In [14]: batches = [b for b in reader]

In [15]: len(batches)
Out[15]: 5

We can check the returned batches are the same as the original input:

In [16]: batches[0].equals(batch)
Out[16]: True

An important point is that if the input source supports zero-copy reads (e.g. a memory map or pyarrow.BufferReader), then the returned batches are also zero-copy and do not allocate any new memory on read.
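For instance, the buffer above can be wrapped in a pyarrow.BufferReader explicitly; a sketch of the same read, where the resulting batches reference the buffer's memory rather than copies of it:

# Wrap the in-memory buffer in a zero-copy input source and read the stream
source = pa.BufferReader(buf)
zero_copy_reader = pa.open_stream(source)
zero_copy_batches = [b for b in zero_copy_reader]  # no data is copied here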

Writing and Reading Random Access Files

The RecordBatchFileWriter has the same API as RecordBatchStreamWriter:

In [17]: sink = pa.BufferOutputStream()

In [18]: writer = pa.RecordBatchFileWriter(sink, batch.schema)

In [19]: for i in range(10):
   ....:    writer.write_batch(batch)
   ....: 

In [20]: writer.close()

In [21]: buf = sink.get_result()

In [22]: buf.size
Out[22]: 5042

The difference between RecordBatchFileReader and RecordBatchStreamReader is that the input source must have a seek method for random access. The stream reader only requires read operations. We can also use the pyarrow.open_file method to open a file:

In [23]: reader = pa.open_file(buf)

Because we have access to the entire payload, we know the number of record batches in the file, and can read any at random:

In [24]: reader.num_record_batches
Out[24]: 10

In [25]: b = reader.get_batch(3)

In [26]: b.equals(batch)
Out[26]: True
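To make the random access property concrete, here is a sketch that writes the same file format to disk and memory-maps it back, assuming the hypothetical path below and that pa.memory_map and read_all behave as described above:

# Write the file format to disk (hypothetical path)
disk_sink = pa.OSFile('/tmp/example_file.arrow', 'wb')
disk_writer = pa.RecordBatchFileWriter(disk_sink, batch.schema)
for i in range(10):
    disk_writer.write_batch(batch)
disk_writer.close()
disk_sink.close()

# Memory-map the file; batches read from it reference the mapped memory
mmap_source = pa.memory_map('/tmp/example_file.arrow', 'r')
file_reader = pa.open_file(mmap_source)
mapped_table = file_reader.read_all()  # concatenate all batches into a Table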

Reading from Stream and File Format for pandas

The stream and file reader classes have a special read_pandas method to simplify reading multiple record batches and converting them to a single DataFrame output:

In [27]: df = pa.open_file(buf).read_pandas()

In [28]: df[:5]
Out[28]: 
   f0    f1     f2
0   1   foo   True
1   2   bar   None
2   3   baz  False
3   4  None   True
4   1   foo   True
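read_pandas is roughly a shorthand for reading all of the record batches into a Table and converting that to pandas; a sketch of the longer spelling:

# The long way around: read everything as a Table, then convert to pandas
table = pa.open_file(buf).read_all()
df2 = table.to_pandas()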

Arbitrary Object Serialization

In pyarrow we are able to serialize and deserialize many kinds of Python objects. While not a complete replacement for the pickle module, these functions can be significantly faster, particularly when dealing with collections of NumPy arrays.

As an example, consider a dictionary containing NumPy arrays:

In [29]: import numpy as np

In [30]: data = {
   ....:     i: np.random.randn(500, 500)
   ....:     for i in range(100)
   ....: }
   ....: 

We use the pyarrow.serialize function to convert this data to a byte buffer:

In [31]: buf = pa.serialize(data).to_buffer()

In [32]: type(buf)
Out[32]: pyarrow.lib.Buffer

In [33]: buf.size
Out[33]: 200029480

pyarrow.serialize creates an intermediate object which can be converted to a buffer (the to_buffer method) or written directly to an output stream.
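For example, the payload can be written straight to a sink without materializing a separate buffer first; a sketch, assuming the intermediate object's write_to method accepts a writable sink like the streams above:

# Write the serialized payload directly to an output stream
stream_sink = pa.BufferOutputStream()
pa.serialize(data).write_to(stream_sink)
serialized_buf = stream_sink.get_result()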

pyarrow.deserialize converts a buffer-like object back to the original Python object:

In [34]: restored_data = pa.deserialize(buf)

In [35]: restored_data[0]
Out[35]: 
array([[ 0.35381923,  0.53793578,  0.90151836, ...,  0.94797932,
         0.32774816, -1.21950683],
       [ 1.03599075,  2.45996329, -0.5973173 , ...,  1.54457256,
         0.63989086,  0.68828891],
       [ 0.10454083,  1.32875505,  2.43756093, ..., -1.17861945,
        -0.57014106,  1.65195681],
       ..., 
       [ 1.09537163,  1.14524725,  0.9057604 , ..., -0.30695854,
        -1.33636352,  0.29628614],
       [ 0.28916841, -0.68369335, -1.04986623, ...,  1.23734746,
         0.27959131,  0.15960512],
       [ 1.06183476, -1.25631287,  0.22069223, ...,  1.88947779,
         0.28856073,  0.72737762]])

When dealing with NumPy arrays, pyarrow.deserialize can be significantly faster than pickle because the resulting arrays are zero-copy references into the input buffer. The larger the arrays, the larger the performance savings.

For example, deserializing the buffer above with pyarrow.deserialize gives:

In [36]: %timeit restored_data = pa.deserialize(buf)
921 us +- 7.54 us per loop (mean +- std. dev. of 7 runs, 1000 loops each)

And for pickle:

In [37]: import pickle

In [38]: pickled = pickle.dumps(data)

In [39]: %timeit unpickled_data = pickle.loads(pickled)
39.2 ms +- 94.3 us per loop (mean +- std. dev. of 7 runs, 10 loops each)

We aspire to make these functions a high-speed alternative to pickle for transient serialization in Python big data applications.

Serializing Custom Data Types

If an unrecognized data type is encountered when serializing an object, pyarrow will fall back on using pickle for converting that type to a byte string. There may be a more efficient way, though.

Consider a class with two members, one of which is a NumPy array:

class MyData:
    def __init__(self, name, data):
        self.name = name
        self.data = data

We write functions to convert this to and from a dictionary with simpler types:

def _serialize_MyData(val):
    return {'name': val.name, 'data': val.data}

def _deserialize_MyData(data):
    return MyData(data['name'], data['data'])

We then register these functions in a SerializationContext so that MyData can be recognized:

context = pa.SerializationContext()
context.register_type(MyData, 'MyData',
                      custom_serializer=_serialize_MyData,
                      custom_deserializer=_deserialize_MyData)

Lastly, we use this context as an additional argument to pyarrow.serialize:

buf = pa.serialize(val, context=context).to_buffer()
restored_val = pa.deserialize(buf, context=context)
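Putting it together, a short round trip with the class and context above; the names and values here are purely illustrative:

# Round-trip an instance through the registered context (np is numpy)
example = MyData('my-array', np.random.randn(10))
example_buf = pa.serialize(example, context=context).to_buffer()
restored = pa.deserialize(example_buf, context=context)

assert restored.name == example.name
assert np.array_equal(restored.data, example.data)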

Feather Format

Feather is a lightweight file-format for data frames that uses the Arrow memory layout for data representation on disk. It was created early in the Arrow project as a proof of concept for fast, language-agnostic data frame storage for Python (pandas) and R.

Compared with Arrow streams and files, Feather has some limitations:

  • Only non-nested data types and categorical (dictionary-encoded) types are supported
  • Supports only a single batch of rows, where general Arrow streams support an arbitrary number
  • Supports limited scalar value types, adequate only for representing typical data found in R and pandas

We would like to continue to innovate in the Feather format, but we must wait for an R implementation for Arrow to mature.

The pyarrow.feather module contains the read and write functions for the format. The input and output are pandas.DataFrame objects:

import pyarrow.feather as feather

feather.write_feather(df, '/path/to/file')
read_df = feather.read_feather('/path/to/file')

read_feather supports multithreaded reads, and may yield faster performance on some files:

read_df = feather.read_feather('/path/to/file', nthreads=4)

These functions can read and write with file-like objects. For example:

with open('/path/to/file', 'wb') as f:
    feather.write_feather(df, f)

with open('/path/to/file', 'rb') as f:
    read_df = feather.read_feather(f)

A file input to read_feather must support seeking.
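An in-memory io.BytesIO object satisfies this requirement, since it supports both writing and seeking; a sketch, assuming write_feather accepts file-like destinations as in the example above:

import io

# Round-trip a DataFrame through an in-memory, seekable file-like object
buf_io = io.BytesIO()
feather.write_feather(df, buf_io)
buf_io.seek(0)
read_df = feather.read_feather(buf_io)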