pyarrow.dataset.Scanner#
- class pyarrow.dataset.Scanner#
Bases: _Weakrefable
A materialized scan operation with context and options bound.
A scanner is the class that glues the scan tasks, data fragments and data sources together.
- __init__(*args, **kwargs)#
Methods
- __init__(*args, **kwargs)
- count_rows(self): Count rows matching the scanner filter.
- from_batches(source, *, Schema schema=None): Create a Scanner from an iterator of batches.
- from_dataset(Dataset dataset, *[, columns, ...]): Create Scanner from Dataset.
- from_fragment(Fragment fragment, *, ...[, ...]): Create Scanner from Fragment.
- head(self, int num_rows): Load the first N rows of the dataset.
- scan_batches(self): Consume a Scanner in record batches with corresponding fragments.
- take(self, indices): Select rows of data by index.
- to_batches(self): Consume a Scanner in record batches.
- to_reader(self): Consume this scanner as a RecordBatchReader.
- to_table(self): Convert a Scanner into a Table.
Attributes
- dataset_schema: The schema with which batches will be read from fragments.
- projected_schema: The materialized schema of the data, accounting for projections.
- dataset_schema#
The schema with which batches will be read from fragments.
- static from_batches(source, *, Schema schema=None, columns=None, Expression filter=None, int batch_size=_DEFAULT_BATCH_SIZE, int batch_readahead=_DEFAULT_BATCH_READAHEAD, int fragment_readahead=_DEFAULT_FRAGMENT_READAHEAD, FragmentScanOptions fragment_scan_options=None, bool use_threads=True, MemoryPool memory_pool=None)#
Create a Scanner from an iterator of batches.
This creates a scanner which can be used only once. It is intended to support writing a dataset (which takes a scanner) from a source which can be read only once (e.g. a RecordBatchReader or generator).
- Parameters:
  - source : Iterator or Arrow-compatible stream object
    The iterator of Batches. This can be a pyarrow RecordBatchReader, any object that implements the Arrow PyCapsule Protocol for streams, or an actual Python iterator of RecordBatches.
  - schema : Schema
    The schema of the batches (required when passing a Python iterator).
  - columns : list[str] or dict[str, Expression], default None
    The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.
    The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).
    The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset’s Schema.
  - filter : Expression, default None
    Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise it filters the loaded RecordBatches before yielding them.
  - batch_size : int, default 131_072
    The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be lowered to reduce their size.
  - batch_readahead : int, default 16
    The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.
  - fragment_readahead : int, default 4
    The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.
  - fragment_scan_options : FragmentScanOptions, default None
    Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.
  - use_threads : bool, default True
    If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.
  - memory_pool : MemoryPool, default None
    For memory allocations, if required. If not specified, uses the default pool.
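A minimal sketch of creating a single-use scanner from in-memory batches; the table contents and column names here are purely illustrative:

    import pyarrow as pa
    import pyarrow.dataset as ds

    # An illustrative in-memory table split into record batches.
    table = pa.table({"x": [1, 2, 3, 4], "y": ["a", "b", "c", "d"]})
    batches = table.to_batches(max_chunksize=2)

    # A plain Python iterator of RecordBatches requires an explicit schema.
    scanner = ds.Scanner.from_batches(iter(batches), schema=table.schema)
    result = scanner.to_table()   # consumes the scanner; it cannot be reused

    # A RecordBatchReader carries its own schema, so schema can be omitted.
    reader = pa.RecordBatchReader.from_batches(table.schema, table.to_batches())
    scanner = ds.Scanner.from_batches(reader)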
- static from_dataset(Dataset dataset, *, columns=None, filter=None, int batch_size=_DEFAULT_BATCH_SIZE, int batch_readahead=_DEFAULT_BATCH_READAHEAD, int fragment_readahead=_DEFAULT_FRAGMENT_READAHEAD, FragmentScanOptions fragment_scan_options=None, bool use_threads=True, MemoryPool memory_pool=None)#
Create Scanner from Dataset.
- Parameters:
  - dataset : Dataset
    Dataset to scan.
  - columns : list[str] or dict[str, Expression], default None
    The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.
    The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).
    The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset’s Schema.
  - filter : Expression, default None
    Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise it filters the loaded RecordBatches before yielding them.
  - batch_size : int, default 131_072
    The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be lowered to reduce their size.
  - batch_readahead : int, default 16
    The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.
  - fragment_readahead : int, default 4
    The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.
  - fragment_scan_options : FragmentScanOptions, default None
    Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.
  - use_threads : bool, default True
    If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.
  - memory_pool : MemoryPool, default None
    For memory allocations, if required. If not specified, uses the default pool.
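A minimal sketch of scanning a dataset with a projection and a pushed-down filter; the path "data/" and the column names "x" and "y" are assumptions for illustration:

    import pyarrow.dataset as ds

    # Assumed on-disk Parquet dataset with columns "x" and "y".
    dataset = ds.dataset("data/", format="parquet")

    scanner = ds.Scanner.from_dataset(
        dataset,
        columns={"x": ds.field("x"), "y_renamed": ds.field("y")},  # dict projection can rename columns
        filter=ds.field("x") > 0,   # pushed down where possible (e.g. Parquet statistics)
        batch_size=64_000,
    )
    table = scanner.to_table()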
- static from_fragment(Fragment fragment, *, Schema schema=None, columns=None, Expression filter=None, int batch_size=_DEFAULT_BATCH_SIZE, int batch_readahead=_DEFAULT_BATCH_READAHEAD, int fragment_readahead=_DEFAULT_FRAGMENT_READAHEAD, FragmentScanOptions fragment_scan_options=None, bool use_threads=True, MemoryPool memory_pool=None)#
Create Scanner from Fragment.
- Parameters:
  - fragment : Fragment
    Fragment to scan.
  - schema : Schema, optional
    The schema of the fragment.
  - columns : list[str] or dict[str, Expression], default None
    The columns to project. This can be a list of column names to include (order and duplicates will be preserved), or a dictionary with {new_column_name: expression} values for more advanced projections.
    The list of columns or expressions may use the special fields __batch_index (the index of the batch within the fragment), __fragment_index (the index of the fragment within the dataset), __last_in_fragment (whether the batch is last in fragment), and __filename (the name of the source file or a description of the source fragment).
    The columns will be passed down to Datasets and corresponding data fragments to avoid loading, copying, and deserializing columns that will not be required further down the compute chain. By default all of the available columns are projected. Raises an exception if any of the referenced column names does not exist in the dataset’s Schema.
  - filter : Expression, default None
    Scan will return only the rows matching the filter. If possible the predicate will be pushed down to exploit the partition information or internal metadata found in the data source, e.g. Parquet statistics. Otherwise it filters the loaded RecordBatches before yielding them.
  - batch_size : int, default 131_072
    The maximum row count for scanned record batches. If scanned record batches are overflowing memory then this value can be lowered to reduce their size.
  - batch_readahead : int, default 16
    The number of batches to read ahead in a file. This might not work for all file formats. Increasing this number will increase RAM usage but could also improve IO utilization.
  - fragment_readahead : int, default 4
    The number of files to read ahead. Increasing this number will increase RAM usage but could also improve IO utilization.
  - fragment_scan_options : FragmentScanOptions, default None
    Options specific to a particular scan and fragment type, which can change between different scans of the same dataset.
  - use_threads : bool, default True
    If enabled, maximum parallelism will be used, as determined by the number of available CPU cores.
  - memory_pool : MemoryPool, default None
    For memory allocations, if required. If not specified, uses the default pool.
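A minimal sketch of scanning the fragments of a dataset one by one; the path "data/" and the column "x" are assumptions for illustration:

    import pyarrow.dataset as ds

    dataset = ds.dataset("data/", format="parquet")   # assumed Parquet dataset

    for fragment in dataset.get_fragments():
        scanner = ds.Scanner.from_fragment(
            fragment,
            schema=dataset.schema,        # read with the dataset-level schema
            filter=ds.field("x") > 0,     # assumed column "x"
        )
        print(fragment.path, scanner.count_rows())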
- head(self, int num_rows)#
Load the first N rows of the dataset.
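For instance, a quick peek at a dataset without scanning it fully (the path is assumed):

    import pyarrow.dataset as ds

    dataset = ds.dataset("data/", format="parquet")   # assumed dataset path
    scanner = ds.Scanner.from_dataset(dataset)

    # Only as much of the dataset as needed for 5 rows is read; the result is a Table.
    first_rows = scanner.head(5)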
- projected_schema#
The materialized schema of the data, accounting for projections.
This is the schema of any data returned from the scanner.
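A small sketch contrasting the two schema attributes under a projection; the path and column names are assumed:

    import pyarrow.dataset as ds

    dataset = ds.dataset("data/", format="parquet")   # assumed columns "x" and "y"
    scanner = ds.Scanner.from_dataset(dataset, columns=["x"])

    print(scanner.dataset_schema)     # schema the fragments are read with ("x" and "y")
    print(scanner.projected_schema)   # schema of the returned data (only "x")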
- scan_batches(self)#
Consume a Scanner in record batches with corresponding fragments.
- Returns:
  - record_batches : iterator of TaggedRecordBatch
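A sketch of iterating the tagged batches, each of which pairs a batch with the fragment it came from; the dataset path is assumed:

    import pyarrow.dataset as ds

    dataset = ds.dataset("data/", format="parquet")   # assumed dataset
    scanner = ds.Scanner.from_dataset(dataset)

    for tagged in scanner.scan_batches():
        # Each TaggedRecordBatch exposes the RecordBatch and its source fragment.
        print(tagged.fragment, tagged.record_batch.num_rows)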
- take(self, indices)#
Select rows of data by index.
Will only consume as many batches of the underlying dataset as needed. Otherwise, this is equivalent to to_table().take(indices).
- Parameters:
  - indices : Array or array-like
    Indices of rows to select in the dataset.
- Returns:
  - Table
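A sketch of selecting specific rows by position; the path and indices are illustrative:

    import pyarrow as pa
    import pyarrow.dataset as ds

    dataset = ds.dataset("data/", format="parquet")   # assumed dataset
    scanner = ds.Scanner.from_dataset(dataset)

    # Returns a Table with rows 0, 2 and 5 of the scan result,
    # reading only as many batches as needed to reach index 5.
    subset = scanner.take(pa.array([0, 2, 5]))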
- to_batches(self)#
Consume a Scanner in record batches.
- Returns:
  - record_batches : iterator of RecordBatch
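A sketch of streaming the scan result batch by batch instead of materializing a full Table; the dataset path is assumed:

    import pyarrow.dataset as ds

    dataset = ds.dataset("data/", format="parquet")   # assumed dataset
    scanner = ds.Scanner.from_dataset(dataset)

    total = 0
    for batch in scanner.to_batches():   # each item is a pyarrow.RecordBatch
        total += batch.num_rows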
- to_reader(self)#
Consume this scanner as a RecordBatchReader.
- Returns:
  - RecordBatchReader
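A sketch of handing the scan off as a RecordBatchReader, for example to write it back out without materializing it; the input and output paths are assumed:

    import pyarrow.dataset as ds

    dataset = ds.dataset("data/", format="parquet")        # assumed source dataset
    reader = ds.Scanner.from_dataset(dataset).to_reader()

    # The reader streams batches; here it feeds a dataset write (output path assumed).
    ds.write_dataset(reader, "out/", format="parquet")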