Struct ArrowReaderOptions

pub struct ArrowReaderOptions {
    skip_arrow_metadata: bool,
    supplied_schema: Option<SchemaRef>,
    pub(crate) page_index_policy: PageIndexPolicy,
    metadata_options: ParquetMetaDataOptions,
    pub(crate) file_decryption_properties: Option<Arc<FileDecryptionProperties>>,
    virtual_columns: Vec<FieldRef>,
}

Options that control how ParquetMetaData is read when constructing an Arrow reader.

To use these options, pass them to a builder constructor such as ParquetRecordBatchReaderBuilder::try_new_with_options. For fine-grained control over metadata loading, use ArrowReaderMetadata::load to load metadata with these options.

See ArrowReaderBuilder for how to configure how the column data is then read from the file, including projection and filter pushdown.
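
A minimal sketch of typical use (assuming a local Parquet file opened as a std::fs::File; "data.parquet" is a placeholder path): options are built fluently and handed to ParquetRecordBatchReaderBuilder::try_new_with_options.

use std::fs::File;
use parquet::arrow::arrow_reader::{ArrowReaderOptions, ParquetRecordBatchReaderBuilder};

// Each with_* method consumes and returns Self, so calls can be chained.
let options = ArrowReaderOptions::new()
    .with_skip_arrow_metadata(true) // ignore any embedded Arrow schema
    .with_page_index(true);         // also read offset and column indexes

let file = File::open("data.parquet").unwrap();
let reader = ParquetRecordBatchReaderBuilder::try_new_with_options(file, options)
    .unwrap()
    .build()
    .unwrap();
for batch in reader {
    println!("read {} rows", batch.unwrap().num_rows());
}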

Fields

skip_arrow_metadata: bool

Should the reader strip any user defined metadata from the Arrow schema?

supplied_schema: Option<SchemaRef>

If provided, used as the schema hint when determining the Arrow schema; otherwise the schema hint is read from the ARROW_SCHEMA_META_KEY.

page_index_policy: PageIndexPolicy

Policy for reading offset and column indexes.

metadata_options: ParquetMetaDataOptions

Options to control reading of Parquet metadata.

file_decryption_properties: Option<Arc<FileDecryptionProperties>>

If encryption is enabled, the file decryption properties to use when reading the file.

virtual_columns: Vec<FieldRef>

Virtual columns, such as row numbers, to include in the reader's output; see with_virtual_columns.

Implementations

impl ArrowReaderOptions

pub fn new() -> Self

Create a new ArrowReaderOptions with the default settings

pub fn with_skip_arrow_metadata(self, skip_arrow_metadata: bool) -> Self

Skip decoding the embedded arrow metadata (defaults to false)

Parquet files generated by some writers may contain embedded arrow schema and metadata. This embedded schema may not be correct or compatible with your system; for example, see ARROW-16184.

pub fn with_schema(self, schema: SchemaRef) -> Self

Provide a schema hint to use when reading the Parquet file.

If provided, this schema takes precedence over any arrow schema embedded in the metadata (see the arrow documentation for more details).

If the provided schema is not compatible with the parquet file's schema, an error will be returned when constructing the builder.

This option is only required if you want to explicitly control the conversion of Parquet types to Arrow types, such as casting a column to a different type: for example, reading an Int64 column in a Parquet file as a TimestampMicrosecondArray in the Arrow schema.

Notes

The provided schema must have the same number of columns as the parquet schema and the column names must be the same.

Example
use std::sync::Arc;
use tempfile::tempfile;
use arrow_array::{ArrayRef, Int32Array, RecordBatch};
use arrow_schema::{DataType, Field, Schema, TimeUnit};
use parquet::arrow::arrow_reader::{ArrowReaderOptions, ParquetRecordBatchReaderBuilder};
use parquet::arrow::ArrowWriter;

// Write data - schema is inferred from the data to be Int32
let file = tempfile().unwrap();
let batch = RecordBatch::try_from_iter(vec![
    ("col_1", Arc::new(Int32Array::from(vec![1, 2, 3])) as ArrayRef),
]).unwrap();
let mut writer = ArrowWriter::try_new(file.try_clone().unwrap(), batch.schema(), None).unwrap();
writer.write(&batch).unwrap();
writer.close().unwrap();

// Read the file back.
// Supply a schema that interprets the Int32 column as a Timestamp.
let supplied_schema = Arc::new(Schema::new(vec![
    Field::new("col_1", DataType::Timestamp(TimeUnit::Nanosecond, None), false)
]));
let options = ArrowReaderOptions::new().with_schema(supplied_schema.clone());
let mut builder = ParquetRecordBatchReaderBuilder::try_new_with_options(
    file.try_clone().unwrap(),
    options
).expect("Error if the schema is not compatible with the parquet file schema.");

// Create the reader and read the data using the supplied schema.
let mut reader = builder.build().unwrap();
let _batch = reader.next().unwrap().unwrap();

pub fn with_page_index(self, page_index: bool) -> Self

Enable reading the PageIndex from the metadata, if present (defaults to false)

Some query engines use the PageIndex to push down predicates to the parquet scan, potentially eliminating unnecessary IO.

If this is enabled, ParquetMetaData::column_index and ParquetMetaData::offset_index will be populated if the corresponding information is present in the file.
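
A brief sketch of the effect (assuming "indexed.parquet" is a placeholder for a file written with page indexes): after enabling this option, the indexes are available on the parsed metadata.

use std::fs::File;
use parquet::arrow::arrow_reader::{ArrowReaderOptions, ParquetRecordBatchReaderBuilder};

let options = ArrowReaderOptions::new().with_page_index(true);
let file = File::open("indexed.parquet").unwrap();
let builder = ParquetRecordBatchReaderBuilder::try_new_with_options(file, options).unwrap();

// Populated only if the file actually contains the page index information.
let metadata = builder.metadata();
assert!(metadata.column_index().is_some());
assert!(metadata.offset_index().is_some());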

pub fn with_page_index_policy(self, policy: PageIndexPolicy) -> Self

Set the PageIndexPolicy to determine how page indexes should be read.

See Self::with_page_index for more details.
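
For illustration, a sketch that requires the page index to be present rather than reading it opportunistically. The PageIndexPolicy import path and the Required variant are assumptions here; see the PageIndexPolicy documentation for the available policies.

use parquet::arrow::arrow_reader::ArrowReaderOptions;
// Assumed import path and variant name; check PageIndexPolicy in your parquet version.
use parquet::file::metadata::PageIndexPolicy;

// With a "required" policy, constructing the reader fails if the file
// does not contain offset/column indexes, instead of silently skipping them.
let options = ArrowReaderOptions::new()
    .with_page_index_policy(PageIndexPolicy::Required);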

pub fn with_parquet_schema(self, schema: Arc<SchemaDescriptor>) -> Self

Provide a Parquet schema to use when decoding the metadata. The schema in the Parquet footer will be skipped.

This can be used to avoid reparsing the schema from the file when it is already known.
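
One plausible use, sketched under the assumption that the file was already opened once with an ordinary builder: reuse the already-parsed schema descriptor when opening it again ("data.parquet" is a placeholder path).

use std::fs::File;
use parquet::arrow::arrow_reader::{ArrowReaderOptions, ParquetRecordBatchReaderBuilder};

// First open: parse the footer metadata, including the schema, as usual.
let file = File::open("data.parquet").unwrap();
let builder = ParquetRecordBatchReaderBuilder::try_new(file.try_clone().unwrap()).unwrap();
let schema_descr = builder.metadata().file_metadata().schema_descr_ptr();

// Subsequent opens: supply the known Parquet schema so the footer schema
// does not need to be parsed again.
let options = ArrowReaderOptions::new().with_parquet_schema(schema_descr);
let reader = ParquetRecordBatchReaderBuilder::try_new_with_options(file, options)
    .unwrap()
    .build()
    .unwrap();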

pub fn with_file_decryption_properties(self, file_decryption_properties: Arc<FileDecryptionProperties>) -> Self

Provide the file decryption properties to use when reading encrypted parquet files.

If encryption is enabled and the file is encrypted, the file_decryption_properties must be provided.
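
A hedged sketch of wiring this up with a footer key ("encrypted.parquet" and the 16-byte key are placeholders; the FileDecryptionProperties builder call and its module path are assumptions here and require the crate's encryption feature, so consult the encryption documentation for the exact API).

use std::fs::File;
use std::sync::Arc;
use parquet::arrow::arrow_reader::{ArrowReaderOptions, ParquetRecordBatchReaderBuilder};
// Assumed module path and builder signature for this sketch.
use parquet::encryption::decrypt::FileDecryptionProperties;

// AES key that was used to encrypt the file's footer (placeholder value).
let footer_key = b"0123456789012345".to_vec();
let decryption_properties = FileDecryptionProperties::builder(footer_key)
    .build()
    .unwrap();

let options = ArrowReaderOptions::new()
    .with_file_decryption_properties(Arc::new(decryption_properties));
let file = File::open("encrypted.parquet").unwrap();
let reader = ParquetRecordBatchReaderBuilder::try_new_with_options(file, options)
    .unwrap()
    .build()
    .unwrap();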

pub fn with_virtual_columns(self, virtual_columns: Vec<FieldRef>) -> Result<Self>

Include virtual columns in the output.

Virtual columns are columns that are not part of the Parquet schema but are added to the output by the reader, such as row numbers.

Example

use std::sync::Arc;
use tempfile::tempfile;
use arrow_array::{ArrayRef, Int64Array, RecordBatch};
use arrow_schema::{DataType, Field};
use parquet::arrow::ArrowWriter;
use parquet::arrow::arrow_reader::{ArrowReaderOptions, ParquetRecordBatchReaderBuilder};
// Also needed: the RowNumber extension type exported by the parquet crate
// (import path omitted here), and an enclosing fn returning Result for `?`.

// Create a simple record batch with some data
let values = Arc::new(Int64Array::from(vec![1, 2, 3])) as ArrayRef;
let batch = RecordBatch::try_from_iter(vec![("value", values)])?;

// Write the batch to a temporary parquet file
let file = tempfile()?;
let mut writer = ArrowWriter::try_new(
    file.try_clone()?,
    batch.schema(),
    None
)?;
writer.write(&batch)?;
writer.close()?;

// Create a virtual column for row numbers
let row_number_field = Arc::new(Field::new("row_number", DataType::Int64, false)
    .with_extension_type(RowNumber));

// Configure options with virtual columns
let options = ArrowReaderOptions::new()
    .with_virtual_columns(vec![row_number_field])?;

// Create a reader with the options
let mut reader = ParquetRecordBatchReaderBuilder::try_new_with_options(
    file,
    options
)?
.build()?;

// Read the batch - it will include both the original column and the virtual row_number column
let result_batch = reader.next().unwrap()?;
assert_eq!(result_batch.num_columns(), 2); // "value" + "row_number"
assert_eq!(result_batch.num_rows(), 3);

pub fn page_index(&self) -> bool

Retrieve the currently set page index behavior.

This can be set via with_page_index.

pub fn metadata_options(&self) -> &ParquetMetaDataOptions

Retrieve the currently set metadata decoding options.

pub fn file_decryption_properties(&self) -> Option<&Arc<FileDecryptionProperties>>

Retrieve the currently set file decryption properties.

This can be set via with_file_decryption_properties.
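
A small sketch showing that the getters reflect what was configured with the corresponding with_* setters.

use parquet::arrow::arrow_reader::ArrowReaderOptions;

let options = ArrowReaderOptions::new().with_page_index(true);

// Getters reflect the configured values.
assert!(options.page_index());
assert!(options.file_decryption_properties().is_none());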

Trait Implementations

impl Clone for ArrowReaderOptions

fn clone(&self) -> ArrowReaderOptions

Returns a duplicate of the value.

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source.

impl Debug for ArrowReaderOptions

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter.

impl Default for ArrowReaderOptions

fn default() -> ArrowReaderOptions

Returns the "default value" for a type.

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T where T: 'static + ?Sized
impl<T> Borrow<T> for T where T: ?Sized
impl<T> BorrowMut<T> for T where T: ?Sized
impl<T> CloneToUninit for T where T: Clone
impl<T> From<T> for T
impl<T, U> Into<U> for T where U: From<T>
impl<T> IntoEither for T
impl<T> ToOwned for T where T: Clone
impl<T, U> TryFrom<U> for T where U: Into<T>
impl<T, U> TryInto<U> for T where U: TryFrom<T>
impl<V, T> VZip<V> for T where V: MultiLane<T>
impl<T> Ungil for T where T: Send