pyarrow.ipc.RecordBatchStreamReader¶
- class pyarrow.ipc.RecordBatchStreamReader(source, *, options=None, memory_pool=None)[source]¶
Bases: _RecordBatchStreamReader
Reader for the Arrow streaming binary format.
- Parameters:
- source : bytes/buffer-like, pyarrow.NativeFile, or file-like Python object
Either an in-memory buffer or a readable file object. To use a memory map, pass a MemoryMappedFile as source.
- options : pyarrow.ipc.IpcReadOptions
Options for IPC deserialization. If None, default values will be used.
- memory_pool : MemoryPool, default None
If None, the default memory pool is used.
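As an illustrative sketch (not part of the upstream reference), the following writes a small table to an in-memory buffer in the Arrow streaming format and reads it back with RecordBatchStreamReader; the table contents and variable names are assumptions made for the example.

```python
import pyarrow as pa
import pyarrow.ipc as ipc

# Build a small table and serialize it to the Arrow streaming format
# in an in-memory buffer, so the example is self-contained.
table = pa.table({"x": [1, 2, 3]})
sink = pa.BufferOutputStream()
with ipc.new_stream(sink, table.schema) as writer:
    writer.write_table(table)
buf = sink.getvalue()

# Any bytes/buffer-like object (or a readable file object) works as `source`.
reader = ipc.RecordBatchStreamReader(buf)
print(reader.schema)        # shared schema of the batches in the stream
result = reader.read_all()  # materialize everything as a pyarrow.Table
reader.close()              # release resources held by the reader
```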
Methods
- __init__(source, *[, options, memory_pool])
- close(self): Release any resources associated with the reader.
- from_batches(schema, batches): Create RecordBatchReader from an iterable of batches.
- get_next_batch(self): DEPRECATED: return the next record batch.
- read_all(self): Read all record batches as a pyarrow.Table.
- read_next_batch(self): Read next RecordBatch from the stream.
- read_pandas(self, **options): Read contents of stream to a pandas.DataFrame.
Attributes
- schema: Shared schema of the record batches in the stream.
- stats: Current IPC read statistics.
- close(self)¶
Release any resources associated with the reader.
- static from_batches(schema, batches)¶
Create RecordBatchReader from an iterable of batches.
- Parameters:
- schema : Schema
The shared schema of the record batches.
- batches : Iterable[RecordBatch]
The batches that this reader will return.
- Returns:
- reader : RecordBatchReader
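A minimal sketch of from_batches, assuming a single in-memory batch built with RecordBatch.from_pydict; because the method returns a RecordBatchReader, it is shown here called on that class.

```python
import pyarrow as pa

# One record batch built from a Python dict; its schema is reused below.
batch = pa.RecordBatch.from_pydict({"x": [1, 2, 3]})

# Wrap an iterable of batches that share one schema in a reader.
reader = pa.RecordBatchReader.from_batches(batch.schema, [batch])
table = reader.read_all()
```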
- get_next_batch(self)¶
DEPRECATED: return the next record batch.
Use read_next_batch instead.
- read_next_batch(self)¶
Read next RecordBatch from the stream.
- Returns:
- RecordBatch
- Raises:
- StopIteration:
At end of stream.
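A minimal sketch of consuming a stream batch by batch; `reader` is assumed to be an open, not-yet-consumed RecordBatchStreamReader like the one constructed earlier.

```python
# Pull batches one at a time until the stream signals its end.
while True:
    try:
        batch = reader.read_next_batch()
    except StopIteration:
        break  # end of stream reached
    # process `batch` here, e.g. inspect batch.num_rows
```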
- read_pandas(self, **options)¶
Read contents of stream to a pandas.DataFrame.
Read all record batches as a pyarrow.Table, then convert it to a pandas.DataFrame using Table.to_pandas.
- Parameters:
- **options
Arguments to forward to Table.to_pandas().
- Returns:
- df : pandas.DataFrame
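A minimal sketch of read_pandas, reusing the buffer `buf` from the earlier sketch; `use_threads` is one example of a Table.to_pandas() keyword that can be forwarded, and pandas must be installed.

```python
import pyarrow.ipc as ipc

# Keyword arguments are forwarded to Table.to_pandas().
df = ipc.RecordBatchStreamReader(buf).read_pandas(use_threads=True)
```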
- stats¶
Current IPC read statistics.
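A minimal sketch of inspecting stats after fully consuming a stream, reusing `ipc` and `buf` from the earlier sketch.

```python
# After reading, `stats` summarizes what the reader consumed
# (e.g. counts of messages and record batches).
reader = ipc.RecordBatchStreamReader(buf)
reader.read_all()
print(reader.stats)
```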