// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements.  See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership.  The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License.  You may obtain a copy of the License at
//
//   http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied.  See the License for the
// specific language governing permissions and limitations
// under the License.

//!
//! This crate contains the official Native Rust implementation of
//! [Apache Parquet](https://parquet.apache.org/), part of
//! the [Apache Arrow](https://arrow.apache.org/) project.
//! The crate provides a number of APIs to read and write Parquet files,
//! covering a range of use cases.
//!
//! Please see the [parquet crates.io](https://crates.io/crates/parquet)
//! page for feature flags and tips to improve performance.
//!
//! # Format Overview
//!
//! Parquet is a columnar format, which means that unlike row formats like [CSV], values are
//! iterated along columns instead of rows. Parquet is similar in spirit to [Arrow], but
//! focuses on storage efficiency whereas Arrow prioritizes compute efficiency.
//!
//! Parquet files are partitioned for scalability. Each file contains metadata,
//! along with zero or more "row groups", each row group containing one or
//! more columns. The APIs in this crate reflect this structure.
//!
//! Data in Parquet files is strongly typed and differentiates between logical
//! and physical types (see [`schema`]). In addition, Parquet files may contain
//! other metadata, such as statistics, which can be used to optimize reading
//! (see [`file::metadata`]).
//! For more details about the Parquet format itself, see the [Parquet spec].
//!
//! [Parquet spec]: https://github.com/apache/parquet-format/blob/master/README.md#file-format
//!
//! # APIs
//!
//! This crate exposes a number of APIs for different use cases.
//!
//! ## Metadata and Schema
//!
//! The [`schema`] module provides APIs to work with Parquet schemas. The
//! [`file::metadata`] module provides APIs to work with Parquet metadata.
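//!
//! For example, a schema can be parsed from its text representation with
//! [`schema::parser::parse_message_type`] (a brief sketch; the schema text
//! below is made up for illustration):
//!
//! ```
//! use parquet::schema::parser::parse_message_type;
//!
//! // Parse a Parquet "message type" (schema) from its textual form
//! let message_type = "
//!   message example_schema {
//!     REQUIRED INT32 id;
//!     OPTIONAL BYTE_ARRAY name (UTF8);
//!   }
//! ";
//! let schema = parse_message_type(message_type).unwrap();
//! assert_eq!(schema.get_fields().len(), 2);
//! ```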
//!
//! ## Reading and Writing Arrow (`arrow` feature)
//!
//! The [`arrow`] module supports reading and writing Parquet data to/from
//! Arrow [`RecordBatch`]es. Using Arrow is simple and performant, and allows workloads
//! to leverage the wide range of data transforms provided by the [arrow] crate, and by the
//! ecosystem of [Arrow] compatible systems.
//!
//! Most users will use [`ArrowWriter`] for writing and [`ParquetRecordBatchReaderBuilder`] for
//! reading from synchronous IO sources such as files or in-memory buffers.
//!
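//! For example, a minimal sketch (assuming the `arrow` feature is enabled, and
//! using the `arrow-array` and `bytes` crates) that writes a [`RecordBatch`] to an
//! in-memory buffer and reads it back:
//!
//! ```
//! # #[cfg(feature = "arrow")] {
//! use std::sync::Arc;
//! use arrow_array::{ArrayRef, Int32Array, RecordBatch};
//! use bytes::Bytes;
//! use parquet::arrow::ArrowWriter;
//! use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
//!
//! // Create a RecordBatch with a single Int32 column named "id"
//! let ids = Int32Array::from(vec![1, 2, 3]);
//! let batch = RecordBatch::try_from_iter([("id", Arc::new(ids) as ArrayRef)]).unwrap();
//!
//! // Write the batch to an in-memory buffer (any `std::io::Write` works)
//! let mut buffer = Vec::new();
//! let mut writer = ArrowWriter::try_new(&mut buffer, batch.schema(), None).unwrap();
//! writer.write(&batch).unwrap();
//! writer.close().unwrap();
//!
//! // Read the batches back from the buffer
//! let reader = ParquetRecordBatchReaderBuilder::try_new(Bytes::from(buffer))
//!     .unwrap()
//!     .build()
//!     .unwrap();
//! for batch in reader {
//!     assert_eq!(batch.unwrap().num_rows(), 3);
//! }
//! # }
//! ```
//!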
//! Lower level APIs include:
//! * [`ParquetPushDecoder`] for fine-grained control over interleaving of IO and CPU,
//! * [`ArrowColumnWriter`] for writing using multiple threads,
//! * [`RowFilter`] to apply filters during decode.
//!
//! [`ArrowWriter`]: arrow::arrow_writer::ArrowWriter
//! [`ParquetRecordBatchReaderBuilder`]: arrow::arrow_reader::ParquetRecordBatchReaderBuilder
//! [`ParquetPushDecoder`]: arrow::push_decoder::ParquetPushDecoder
//! [`ArrowColumnWriter`]: arrow::arrow_writer::ArrowColumnWriter
//! [`RowFilter`]: arrow::arrow_reader::RowFilter
//!
//! ## `async` Reading and Writing Arrow (`arrow` feature + `async` feature)
//!
//! The [`async_reader`] and [`async_writer`] modules provide async APIs to
//! read and write [`RecordBatch`]es asynchronously.
//!
//! Most users will use [`AsyncArrowWriter`] for writing and [`ParquetRecordBatchStreamBuilder`]
//! for reading. When the `object_store` feature is enabled, [`ParquetObjectReader`]
//! provides efficient integration with object storage services such as S3 via the [object_store]
//! crate, automatically optimizing IO based on any predicates or projections provided.
//!
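//! As a minimal sketch (assuming the `arrow` and `async` features, the `tokio`
//! and `futures` crates, and a hypothetical local file named `data.parquet`),
//! streaming [`RecordBatch`]es from a file looks roughly like this:
//!
//! ```no_run
//! # #[cfg(all(feature = "arrow", feature = "async"))]
//! # async fn example() -> Result<(), Box<dyn std::error::Error>> {
//! use futures::TryStreamExt;
//! use parquet::arrow::async_reader::ParquetRecordBatchStreamBuilder;
//!
//! // Open the file with tokio and build a stream of RecordBatches from it
//! let file = tokio::fs::File::open("data.parquet").await?;
//! let stream = ParquetRecordBatchStreamBuilder::new(file).await?.build()?;
//! let batches: Vec<_> = stream.try_collect().await?;
//! # Ok(())
//! # }
//! ```
//!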
//! [`async_reader`]: arrow::async_reader
//! [`async_writer`]: arrow::async_writer
//! [`AsyncArrowWriter`]: arrow::async_writer::AsyncArrowWriter
//! [`ParquetRecordBatchStreamBuilder`]: arrow::async_reader::ParquetRecordBatchStreamBuilder
//! [`ParquetObjectReader`]: arrow::async_reader::ParquetObjectReader
//!
//! ## Variant Logical Type (`variant_experimental` feature)
//!
//! The [`variant`] module supports reading and writing Parquet files
//! with the [Variant Binary Encoding] logical type, which can represent
//! semi-structured data such as JSON efficiently.
//!
//! [Variant Binary Encoding]: https://github.com/apache/parquet-format/blob/master/VariantEncoding.md
//!
//! ## Read/Write Parquet Directly
//!
//! Workloads needing finer-grained control, or to avoid a dependence on arrow,
//! can use the APIs in [`mod@file`] directly. These APIs are harder to use
//! as they directly use the underlying Parquet data model, and require knowledge
//! of the Parquet format, including the details of [Dremel] record shredding
//! and [Logical Types].
//!
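//! As a rough sketch (`data.parquet` below is a hypothetical path), reading the
//! footer metadata and iterating over records with the low-level [`file::reader`]
//! API looks like this:
//!
//! ```no_run
//! use std::fs::File;
//! use parquet::file::reader::{FileReader, SerializedFileReader};
//!
//! // Open the file and decode its footer metadata
//! let file = File::open("data.parquet").unwrap();
//! let reader = SerializedFileReader::new(file).unwrap();
//! println!("row groups: {}", reader.metadata().num_row_groups());
//!
//! // Iterate over the records using the row-oriented record API
//! for row in reader.get_row_iter(None).unwrap() {
//!     println!("{}", row.unwrap());
//! }
//! ```
//!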
//! [arrow]: https://docs.rs/arrow/latest/arrow/index.html
//! [Arrow]: https://arrow.apache.org/
//! [`RecordBatch`]: https://docs.rs/arrow/latest/arrow/array/struct.RecordBatch.html
//! [CSV]: https://en.wikipedia.org/wiki/Comma-separated_values
//! [Dremel]: https://research.google/pubs/pub36632/
//! [Logical Types]: https://github.com/apache/parquet-format/blob/master/LogicalTypes.md
//! [object_store]: https://docs.rs/object_store/latest/object_store/

#![doc(
    html_logo_url = "https://raw.githubusercontent.com/apache/parquet-format/25f05e73d8cd7f5c83532ce51cb4f4de8ba5f2a2/logo/parquet-logos_1.svg",
    html_favicon_url = "https://raw.githubusercontent.com/apache/parquet-format/25f05e73d8cd7f5c83532ce51cb4f4de8ba5f2a2/logo/parquet-logos_1.svg"
)]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![warn(missing_docs)]
/// Defines an item with an experimental public API
///
/// The module will not be documented, and will only be public if the
/// experimental feature flag is enabled
///
/// Experimental components have no stability guarantees
#[cfg(feature = "experimental")]
macro_rules! experimental {
    ($(#[$meta:meta])* $vis:vis mod $module:ident) => {
        #[doc(hidden)]
        $(#[$meta])*
        pub mod $module;
    }
}

#[cfg(not(feature = "experimental"))]
macro_rules! experimental {
    ($(#[$meta:meta])* $vis:vis mod $module:ident) => {
        $(#[$meta])*
        $vis mod $module;
    }
}

#[cfg(all(
    feature = "flate2",
    not(any(feature = "flate2-zlib-rs", feature = "flate2-rust_backened"))
))]
compile_error!(
    "When enabling `flate2` you must enable one of the features: `flate2-zlib-rs` or `flate2-rust_backened`."
);

#[macro_use]
pub mod errors;
pub mod basic;

/// Automatically generated code from the Parquet thrift definition.
///
/// This module contains code generated from [parquet.thrift]. See [crate::file] for
/// more information on reading Parquet encoded data.
///
/// [parquet.thrift]: https://github.com/apache/parquet-format/blob/master/src/main/thrift/parquet.thrift
// see parquet/CONTRIBUTING.md for instructions on regenerating
// Don't run clippy or rustfmt on auto-generated code
#[allow(clippy::all, missing_docs)]
#[rustfmt::skip]
#[deprecated(
    since = "57.0.0",
    note = "The `format` module is no longer maintained, and will be removed in `59.0.0`"
)]
pub mod format;

#[macro_use]
pub mod data_type;

use std::fmt::Debug;
use std::ops::Range;
// Exported for external use, such as benchmarks
#[cfg(feature = "experimental")]
#[doc(hidden)]
pub use self::encodings::{decoding, encoding};

experimental!(#[macro_use] mod util);

pub use util::utf8;

#[cfg(feature = "arrow")]
pub mod arrow;
pub mod column;
experimental!(mod compression);
experimental!(mod encodings);
pub mod bloom_filter;

#[cfg(feature = "encryption")]
experimental!(pub mod encryption);

pub mod file;
pub mod record;
pub mod schema;

mod parquet_macros;
mod parquet_thrift;
pub mod thrift;
/// The result of attempting to decode the next item from a decoder.
///
/// This is used to communicate between the decoder and the caller,
/// indicating either what data is needed next or the result of decoding.
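///
/// # Example
///
/// A sketch of how a caller might react to each variant of this enum:
///
/// ```
/// use parquet::DecodeResult;
///
/// fn handle(result: DecodeResult<String>) {
///     match result {
///         // The decoder cannot make progress until the caller fetches these
///         // byte ranges and feeds them back to the decoder
///         DecodeResult::NeedsData(ranges) => println!("need bytes: {ranges:?}"),
///         // The decoder produced a decoded item
///         DecodeResult::Data(item) => println!("decoded: {item}"),
///         // The decoder has processed all of its input
///         DecodeResult::Finished => println!("done"),
///     }
/// }
///
/// handle(DecodeResult::Data("a decoded item".to_string()));
/// ```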
#[derive(Debug)]
pub enum DecodeResult<T: Debug> {
    /// The ranges of data necessary to proceed
    // TODO: distinguish between minimum needed to make progress and what could be used?
    NeedsData(Vec<Range<u64>>),
    /// The decoder produced an output item
    Data(T),
    /// The decoder finished processing
    Finished,
}

#[cfg(feature = "variant_experimental")]
pub mod variant;
experimental!(pub mod geospatial);