Streaming, Serialization, and IPC¶
Writing and Reading Streams¶
Arrow defines two types of binary formats for serializing record batches:
Streaming format: for sending an arbitrary length sequence of record batches. The format must be processed from start to end, and does not support random access
File or Random Access format: for serializing a fixed number of record batches. Supports random access, and thus is very useful when used with memory maps
To follow this section, make sure to first read the section on Memory and IO.
Using streams¶
First, let’s create a small record batch:
In [1]: import pyarrow as pa
In [2]: data = [
...: pa.array([1, 2, 3, 4]),
...: pa.array(['foo', 'bar', 'baz', None]),
...: pa.array([True, None, False, True])
...: ]
...:
In [3]: batch = pa.record_batch(data, names=['f0', 'f1', 'f2'])
In [4]: batch.num_rows
Out[4]: 4
In [5]: batch.num_columns
Out[5]: 3
Now, we can begin writing a stream containing some number of these batches. For this we use RecordBatchStreamWriter, which can write to a writeable NativeFile object or a writeable Python object. For convenience, it can be created with new_stream():
In [6]: sink = pa.BufferOutputStream()
In [7]: writer = pa.ipc.new_stream(sink, batch.schema)
Here we used an in-memory Arrow buffer stream, but this could have been a socket or some other IO sink.
When creating the StreamWriter, we pass the schema, since the schema (column names and types) must be the same for all of the batches sent in this particular stream. Now we can do:
In [8]: for i in range(5):
...: writer.write_batch(batch)
...:
In [9]: writer.close()
In [10]: buf = sink.getvalue()
In [11]: buf.size
Out[11]: 1984
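As noted above, the sink could just as well be a file or socket. A minimal sketch writing the same stream to a file on disk with pa.OSFile (the path is only an example, and batch is the record batch created above):

import pyarrow as pa

# Hypothetical path, for illustration only
with pa.OSFile('/tmp/example_stream.arrow', 'wb') as sink:
    writer = pa.ipc.new_stream(sink, batch.schema)
    for i in range(5):
        writer.write_batch(batch)
    writer.close()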
Now buf contains the complete stream as an in-memory byte buffer. We can read such a stream with RecordBatchStreamReader or the convenience function pyarrow.ipc.open_stream:
In [12]: reader = pa.ipc.open_stream(buf)
In [13]: reader.schema
Out[13]:
f0: int64
f1: string
f2: bool
In [14]: batches = [b for b in reader]
In [15]: len(batches)
Out[15]: 5
We can check the returned batches are the same as the original input:
In [16]: batches[0].equals(batch)
Out[16]: True
An important point is that if the input source supports zero-copy reads (e.g. a memory map, or pyarrow.BufferReader), then the returned batches are also zero-copy and do not allocate any new memory on read.
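To illustrate (a sketch, assuming buf from the example above), we can wrap the buffer in a pyarrow.BufferReader and use pa.total_allocated_bytes() to observe that reading the stream allocates little or no new Arrow memory for the batch data:

import pyarrow as pa

source = pa.BufferReader(buf)            # zero-copy readable source
allocated_before = pa.total_allocated_bytes()
reader = pa.ipc.open_stream(source)
batches = [b for b in reader]            # batch data references `buf` directly
allocated_after = pa.total_allocated_bytes()
# allocated_after - allocated_before should be (close to) zero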
Writing and Reading Random Access Files¶
The RecordBatchFileWriter has the same API as RecordBatchStreamWriter. You can create one with new_file():
In [17]: sink = pa.BufferOutputStream()
In [18]: writer = pa.ipc.new_file(sink, batch.schema)
In [19]: for i in range(10):
....: writer.write_batch(batch)
....:
In [20]: writer.close()
In [21]: buf = sink.getvalue()
In [22]: buf.size
Out[22]: 4226
The difference between RecordBatchFileReader and RecordBatchStreamReader is that the input source must have a seek method for random access. The stream reader only requires read operations. We can also use the open_file() method to open a file:
In [23]: reader = pa.ipc.open_file(buf)
Because we have access to the entire payload, we know the number of record batches in the file, and can read any at random:
In [24]: reader.num_record_batches
Out[24]: 10
In [25]: b = reader.get_batch(3)
In [26]: b.equals(batch)
Out[26]: True
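Because the file format supports random access, it pairs naturally with memory maps, as mentioned at the start of this section. A minimal sketch (the path is only an example) that writes the batches to a file on disk and then reads a single batch back through a memory map, without loading the whole file into memory:

import pyarrow as pa

path = '/tmp/example.arrow'              # hypothetical path
with pa.OSFile(path, 'wb') as f:
    writer = pa.ipc.new_file(f, batch.schema)
    for i in range(10):
        writer.write_batch(batch)
    writer.close()

with pa.memory_map(path) as mmap:
    reader = pa.ipc.open_file(mmap)
    b = reader.get_batch(3)              # only the requested batch is touched
    assert b.equals(batch)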
Reading from Stream and File Format for pandas¶
The stream and file reader classes have a special read_pandas method to simplify reading multiple record batches and converting them to a single DataFrame output:
In [27]: df = pa.ipc.open_file(buf).read_pandas()
In [28]: df[:5]
Out[28]:
f0 f1 f2
0 1 foo True
1 2 bar None
2 3 baz False
3 4 None True
4 1 foo True
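read_pandas is a convenience wrapper; a sketch of the roughly equivalent two-step version, which first gathers all batches into a pyarrow.Table with read_all() and then converts the table:

reader = pa.ipc.open_file(buf)
table = reader.read_all()                # concatenate all record batches into a Table
df = table.to_pandas()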
Arbitrary Object Serialization¶
Warning
The custom serialization functionality is deprecated in pyarrow 2.0, and will be removed in a future version.
While the serialization functions in this section utilize the Arrow stream protocol internally, they do not produce data that is compatible with the above ipc.open_file and ipc.open_stream functions.
For arbitrary objects, you can use the standard library pickle functionality instead. For pyarrow objects, you can use the IPC serialization format through the pyarrow.ipc module, as explained above.
PyArrow serialization was originally meant to provide a higher-performance alternative to pickle thanks to zero-copy semantics. However, pickle protocol 5 gained support for zero-copy using out-of-band buffers, and can be used instead for similar benefits.
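For reference, a minimal sketch of pickle protocol 5 with out-of-band buffers (standard library, Python 3.8+), which keeps the array memory out of the pickled byte string:

import pickle
import numpy as np

arr = np.random.randn(500, 500)

buffers = []
payload = pickle.dumps(arr, protocol=5, buffer_callback=buffers.append)
# The array memory is exposed as pickle.PickleBuffer objects in `buffers`
# rather than being copied into `payload`.
restored = pickle.loads(payload, buffers=buffers)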
In pyarrow we are able to serialize and deserialize many kinds of Python objects. As an example, consider a dictionary containing NumPy arrays:
In [29]: import numpy as np
In [30]: data = {
....: i: np.random.randn(500, 500)
....: for i in range(100)
....: }
....:
We use the pyarrow.serialize function to convert this data to a byte buffer:
In [31]: buf = pa.serialize(data).to_buffer()
In [32]: type(buf)
Out[32]: pyarrow.lib.Buffer
In [33]: buf.size
Out[33]: 200028928
pyarrow.serialize creates an intermediate object which can be converted to a buffer (the to_buffer method) or written directly to an output stream. pyarrow.deserialize converts a buffer-like object back to the original Python object:
In [34]: restored_data = pa.deserialize(buf)
In [35]: restored_data[0]
Out[35]:
array([[ 0.49244255, 1.21105594, -0.2994447 , ..., 0.05943073,
-0.73581114, -0.14879906],
[ 1.71868822, 0.31377552, 1.92596215, ..., 2.02117021,
0.54729364, 0.68769683],
[ 2.56196675, -1.1533102 , 0.17254526, ..., -0.23074288,
-0.4365765 , -0.5039851 ],
...,
[-0.71109738, 0.8933496 , -1.35032859, ..., -0.67598971,
-0.9984562 , 0.29535714],
[-2.09784929, -0.82186696, -0.7091407 , ..., 0.92998251,
-0.34076424, 0.63681931],
[-0.45660778, -0.08153091, 1.53972414, ..., -1.38193904,
-0.80207952, 0.16652628]])
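The intermediate object can also be written straight to an output stream rather than materializing a buffer first. A minimal sketch, assuming the write_to method of the (deprecated) serialized object and the data dictionary from above:

sink = pa.BufferOutputStream()
pa.serialize(data).write_to(sink)        # stream the serialized payload directly
restored_data = pa.deserialize(sink.getvalue())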
Serializing Custom Data Types¶
If an unrecognized data type is encountered when serializing an object, pyarrow will fall back on using pickle for converting that type to a byte string. There may be a more efficient way, though.
Consider a class with two members, one of which is a NumPy array:
class MyData:
    def __init__(self, name, data):
        self.name = name
        self.data = data
We write functions to convert this to and from a dictionary with simpler types:
def _serialize_MyData(val):
    return {'name': val.name, 'data': val.data}

def _deserialize_MyData(data):
    return MyData(data['name'], data['data'])
Then, we must register these functions in a SerializationContext so that MyData can be recognized:
context = pa.SerializationContext()
context.register_type(MyData, 'MyData',
                      custom_serializer=_serialize_MyData,
                      custom_deserializer=_deserialize_MyData)
Lastly, we use this context as an additional argument to pyarrow.serialize:
buf = pa.serialize(val, context=context).to_buffer()
restored_val = pa.deserialize(buf, context=context)
The SerializationContext also has convenience methods serialize and deserialize, so these are equivalent statements:
buf = context.serialize(val).to_buffer()
restored_val = context.deserialize(buf)
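Putting the pieces together, a small round-trip sketch using the MyData class, conversion functions, and context defined above:

import numpy as np

val = MyData('example', np.arange(10))
buf = context.serialize(val).to_buffer()
restored_val = context.deserialize(buf)

assert restored_val.name == val.name
assert np.array_equal(restored_val.data, val.data)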
Component-based Serialization¶
For serializing Python objects containing some number of NumPy arrays, Arrow buffers, or other data types, it may be desirable to transport their serialized representation without having to produce an intermediate copy using the to_buffer method. To motivate this, suppose we have a list of NumPy arrays:
In [36]: import numpy as np
In [37]: data = [np.random.randn(10, 10) for i in range(5)]
The call pa.serialize(data) does not copy the memory inside each of these NumPy arrays. This serialized representation can then be decomposed into a dictionary containing a sequence of pyarrow.Buffer objects holding metadata for each array and references to the memory inside the arrays. To do this, use the to_components method:
In [38]: serialized = pa.serialize(data)
In [39]: components = serialized.to_components()
The particular details of the output of to_components are not too important. The objects in the 'data' field are pyarrow.Buffer objects, which are zero-copy convertible to Python memoryview objects:
In [40]: memoryview(components['data'][0])
Out[40]: <memory at 0x7f229ce25b80>
A memoryview can be converted back to an Arrow Buffer with pyarrow.py_buffer:
In [41]: mv = memoryview(components['data'][0])
In [42]: buf = pa.py_buffer(mv)
An object can be reconstructed from its component-based representation using deserialize_components:
In [43]: restored_data = pa.deserialize_components(components)
In [44]: restored_data[0]
Out[44]:
array([[-0.04082833, 1.5931644 , -0.54795181, 0.1513031 , -0.38405015,
-1.46843747, -0.15454 , 0.49413859, -0.13732229, 1.87712787],
[ 0.58046073, -2.01522887, -0.01129027, -1.85506948, -0.32669942,
-1.12377249, 0.24472816, -1.01137582, -0.13463316, 1.7238336 ],
[ 0.84304783, 0.70504069, 1.34958276, -0.44033389, -0.14178378,
1.81212239, 0.77611182, 0.03039864, -1.22193348, -1.12937911],
[ 1.79949928, 0.27856612, -1.03485509, -0.52875097, -0.32142208,
0.6376553 , 0.85744819, -1.71241137, -0.42583743, -0.73106645],
[ 0.39813642, 0.21384825, -0.29972528, 1.36170138, -0.67967471,
0.50030653, 1.57083584, 0.7967523 , -2.16637974, -0.02040292],
[ 0.84347389, -0.08820944, -0.87367026, 0.41110517, 0.21592731,
0.33695053, 0.24298075, -0.0986097 , 0.68849526, 0.18221197],
[-0.40805164, 1.56594465, -0.8631584 , 0.59521914, 0.04174698,
1.85152586, 0.89225671, 0.85152356, 0.56464197, 0.70283898],
[ 0.13087241, -0.42750993, 0.28348359, 0.89470292, -0.34630324,
1.58637605, -0.116677 , 0.1659529 , -1.24896223, 0.65027877],
[-0.89460608, -0.09812297, -0.0859934 , 0.09797761, -1.52705526,
-0.32981627, -0.08636818, 0.35751005, -1.54055739, 1.07662817],
[ 0.09475242, 0.78434653, -0.3182099 , 0.39907456, 0.64034588,
0.15509231, -0.82966682, 0.57816421, -0.82722166, -1.04455861]])
deserialize_components is also available as a method on SerializationContext objects.
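For example, with the context and val from the custom-type section above (a sketch reusing those objects):

components = context.serialize(val).to_components()
restored_val = context.deserialize_components(components)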