
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements.  See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership.  The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License.  You may obtain a copy of the License at
//
//   http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied.  See the License for the
// specific language governing permissions and limitations
// under the License.

//!
//! This crate contains the official Native Rust implementation of
//! [Apache Parquet](https://parquet.apache.org/), part of
//! the [Apache Arrow](https://arrow.apache.org/) project.
//! The crate provides a number of APIs to read and write Parquet files,
//! covering a range of use cases.
//!
//! Please see the [parquet crates.io](https://crates.io/crates/parquet)
//! page for feature flags and tips to improve performance.
//!
//! # Format Overview
//!
//! Parquet is a columnar format, which means that unlike row formats like [CSV], values are
//! iterated along columns instead of rows. Parquet is similar in spirit to [Arrow], but
//! focuses on storage efficiency whereas Arrow prioritizes compute efficiency.
//!
//! Parquet files are partitioned for scalability. Each file contains metadata,
//! along with zero or more "row groups", each row group containing one or
//! more columns. The APIs in this crate reflect this structure.
//!
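//! For example, a minimal sketch of inspecting this structure through the file
//! metadata (`data.parquet` is a placeholder path):
//!
//! ```ignore
//! use parquet::file::reader::{FileReader, SerializedFileReader};
//!
//! let file = std::fs::File::open("data.parquet").unwrap();
//! let reader = SerializedFileReader::new(file).unwrap();
//! // a file contains zero or more row groups, each with one or more columns
//! for row_group in reader.metadata().row_groups() {
//!     println!(
//!         "row group with {} columns and {} rows",
//!         row_group.num_columns(),
//!         row_group.num_rows()
//!     );
//! }
//! ```
//!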
//! Data in Parquet files is strongly typed and differentiates between logical
//! and physical types (see [`schema`]). In addition, Parquet files may contain
//! other metadata, such as statistics, which can be used to optimize reading
//! (see [`file::metadata`]).
//! For more details about the Parquet format itself, see the [Parquet spec].
//!
//! [Parquet spec]: https://github.com/apache/parquet-format/blob/master/README.md#file-format
//!
//! # APIs
//!
//! This crate exposes a number of APIs for different use-cases.
//!
//! ## Metadata and Schema
//!
//! The [`schema`] module provides APIs to work with Parquet schemas. The
//! [`file::metadata`] module provides APIs to work with Parquet metadata.
//!
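//! For example, a sketch of parsing a schema from its textual "message type"
//! representation (the schema shown here is purely illustrative):
//!
//! ```ignore
//! use parquet::schema::parser::parse_message_type;
//!
//! let message_type = "
//!   message example_schema {
//!     REQUIRED INT32 id;
//!     OPTIONAL BYTE_ARRAY name (UTF8);
//!   }
//! ";
//! let schema = parse_message_type(message_type).unwrap();
//! ```
//!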
//! ## Reading and Writing Arrow (`arrow` feature)
//!
//! The [`arrow`] module supports reading and writing Parquet data to/from
//! Arrow `RecordBatch`es. Using Arrow is simple and performant, and allows workloads
//! to leverage the wide range of data transforms provided by the [arrow] crate, and by the
//! ecosystem of [Arrow] compatible systems.
//!
//! Most users will use [`ArrowWriter`] for writing and [`ParquetRecordBatchReaderBuilder`] for
//! reading.
//!
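//! For example, a minimal sketch of a write/read round trip through an
//! in-memory buffer (assumes the `arrow` feature, plus the `arrow_array` and
//! `bytes` crates for the example itself):
//!
//! ```ignore
//! use std::sync::Arc;
//! use arrow_array::{ArrayRef, Int32Array, RecordBatch};
//! use bytes::Bytes;
//! use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
//! use parquet::arrow::arrow_writer::ArrowWriter;
//!
//! // write a RecordBatch to an in-memory buffer in Parquet format
//! let ids = Arc::new(Int32Array::from(vec![1, 2, 3])) as ArrayRef;
//! let batch = RecordBatch::try_from_iter([("id", ids)]).unwrap();
//! let mut buffer = Vec::new();
//! let mut writer = ArrowWriter::try_new(&mut buffer, batch.schema(), None).unwrap();
//! writer.write(&batch).unwrap();
//! writer.close().unwrap();
//!
//! // read the batches back out again
//! let reader = ParquetRecordBatchReaderBuilder::try_new(Bytes::from(buffer))
//!     .unwrap()
//!     .build()
//!     .unwrap();
//! for batch in reader {
//!     println!("{:?}", batch.unwrap());
//! }
//! ```
//!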
//! Lower level APIs include [`ArrowColumnWriter`] for writing using multiple
//! threads, and [`RowFilter`] to apply filters during decode.
//!
//! [`ArrowWriter`]: arrow::arrow_writer::ArrowWriter
//! [`ParquetRecordBatchReaderBuilder`]: arrow::arrow_reader::ParquetRecordBatchReaderBuilder
//! [`ArrowColumnWriter`]: arrow::arrow_writer::ArrowColumnWriter
//! [`RowFilter`]: arrow::arrow_reader::RowFilter
//!
//! ## `async` Reading and Writing Arrow (`async` feature)
//!
//! The [`async_reader`] and [`async_writer`] modules provide async APIs to
//! read and write `RecordBatch`es asynchronously.
//!
//! Most users will use [`AsyncArrowWriter`] for writing and [`ParquetRecordBatchStreamBuilder`]
//! for reading. When the `object_store` feature is enabled, [`ParquetObjectReader`]
//! provides efficient integration with object storage services such as S3 via the [object_store]
//! crate, automatically optimizing IO based on any predicates or projections provided.
//!
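//! For example, a minimal sketch of asynchronously reading a local file into
//! `RecordBatch`es (assumes the `async` feature, a `tokio` runtime, and the
//! `futures` crate; `data.parquet` is a placeholder path):
//!
//! ```ignore
//! use futures::TryStreamExt;
//! use parquet::arrow::async_reader::ParquetRecordBatchStreamBuilder;
//!
//! // inside an async fn
//! let file = tokio::fs::File::open("data.parquet").await.unwrap();
//! let stream = ParquetRecordBatchStreamBuilder::new(file)
//!     .await
//!     .unwrap()
//!     .build()
//!     .unwrap();
//! let batches: Vec<_> = stream.try_collect().await.unwrap();
//! ```
//!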
//! [`async_reader`]: arrow::async_reader
//! [`async_writer`]: arrow::async_writer
//! [`AsyncArrowWriter`]: arrow::async_writer::AsyncArrowWriter
//! [`ParquetRecordBatchStreamBuilder`]: arrow::async_reader::ParquetRecordBatchStreamBuilder
//! [`ParquetObjectReader`]: arrow::async_reader::ParquetObjectReader
//!
//! ## Variant Logical Type (`variant_experimental` feature)
//!
//! The [`variant`] module supports reading and writing Parquet files
//! with the [Variant Binary Encoding] logical type, which can represent
//! semi-structured data such as JSON efficiently.
//!
//! [Variant Binary Encoding]: https://github.com/apache/parquet-format/blob/master/VariantEncoding.md
//!
//! ## Read/Write Parquet Directly
//!
//! Workloads needing finer-grained control, or wishing to avoid a dependency on `arrow`,
//! can use the APIs in [`mod@file`] directly. These APIs are harder to use
//! as they directly use the underlying Parquet data model, and require knowledge
//! of the Parquet format, including the details of [Dremel] record shredding
//! and [Logical Types].
//!
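//! For example, a sketch of iterating over a file row by row with the record
//! API built on top of [`mod@file`] (`data.parquet` is again a placeholder path):
//!
//! ```ignore
//! use parquet::file::reader::{FileReader, SerializedFileReader};
//!
//! let file = std::fs::File::open("data.parquet").unwrap();
//! let reader = SerializedFileReader::new(file).unwrap();
//! // reconstruct records from the underlying columnar data
//! for row in reader.get_row_iter(None).unwrap() {
//!     println!("{:?}", row.unwrap());
//! }
//! ```
//!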
//! [arrow]: https://docs.rs/arrow/latest/arrow/index.html
//! [Arrow]: https://arrow.apache.org/
//! [CSV]: https://en.wikipedia.org/wiki/Comma-separated_values
//! [Dremel]: https://research.google/pubs/pub36632/
//! [Logical Types]: https://github.com/apache/parquet-format/blob/master/LogicalTypes.md
//! [object_store]: https://docs.rs/object_store/latest/object_store/

#![doc(
    html_logo_url = "https://raw.githubusercontent.com/apache/parquet-format/25f05e73d8cd7f5c83532ce51cb4f4de8ba5f2a2/logo/parquet-logos_1.svg",
    html_favicon_url = "https://raw.githubusercontent.com/apache/parquet-format/25f05e73d8cd7f5c83532ce51cb4f4de8ba5f2a2/logo/parquet-logos_1.svg"
)]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![warn(missing_docs)]
/// Defines an item with an experimental public API
///
/// The module will not be documented, and will only be public if the
/// experimental feature flag is enabled
///
/// Experimental components have no stability guarantees
#[cfg(feature = "experimental")]
macro_rules! experimental {
    ($(#[$meta:meta])* $vis:vis mod $module:ident) => {
        #[doc(hidden)]
        $(#[$meta])*
        pub mod $module;
    }
}

#[cfg(not(feature = "experimental"))]
macro_rules! experimental {
    ($(#[$meta:meta])* $vis:vis mod $module:ident) => {
        $(#[$meta])*
        $vis mod $module;
    }
}

#[cfg(all(
    feature = "flate2",
    not(any(feature = "flate2-zlib-rs", feature = "flate2-rust_backened"))
))]
compile_error!(
    "When enabling `flate2` you must enable one of the features: `flate2-zlib-rs` or `flate2-rust_backened`."
);

#[macro_use]
pub mod errors;
pub mod basic;

/// Automatically generated code from the Parquet thrift definition.
///
/// This module contains code generated from [parquet.thrift]. See [crate::file] for
/// more information on reading Parquet encoded data.
///
/// [parquet.thrift]: https://github.com/apache/parquet-format/blob/master/src/main/thrift/parquet.thrift
// See parquet/CONTRIBUTING.md for instructions on regenerating
// Don't run clippy or rustfmt on auto-generated code
#[allow(clippy::all, missing_docs)]
#[rustfmt::skip]
#[deprecated(
    since = "57.0.0",
    note = "The `format` module is no longer maintained, and will be removed in `59.0.0`"
)]
pub mod format;

#[macro_use]
pub mod data_type;

use std::fmt::Debug;
use std::ops::Range;
// Exported for external use, such as benchmarks
#[cfg(feature = "experimental")]
#[doc(hidden)]
pub use self::encodings::{decoding, encoding};

experimental!(#[macro_use] mod util);

pub use util::utf8;

#[cfg(feature = "arrow")]
pub mod arrow;
pub mod column;
experimental!(mod compression);
experimental!(mod encodings);
pub mod bloom_filter;

#[cfg(feature = "encryption")]
experimental!(pub mod encryption);

pub mod file;
pub mod record;
pub mod schema;

mod parquet_macros;
mod parquet_thrift;
pub mod thrift;
/// What data is needed to read the next item from a decoder.
///
/// This is used to communicate between the decoder and the caller
/// to indicate what data is needed next, or what the result of decoding is.
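///
/// # Example
///
/// A sketch of a caller's driver loop; `decoder`, `try_decode`, `push_data`,
/// `fetch_ranges`, and `process` are hypothetical and stand in for a concrete
/// decoder and I/O layer:
///
/// ```ignore
/// loop {
///     match decoder.try_decode()? {
///         // the decoder needs more bytes: fetch the requested ranges and
///         // feed them back before decoding again
///         DecodeResult::NeedsData(ranges) => decoder.push_data(fetch_ranges(ranges)?),
///         // a decoded item is ready for the caller
///         DecodeResult::Data(item) => process(item)?,
///         // the decoder has produced all of its output
///         DecodeResult::Finished => break,
///     }
/// }
/// ```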
#[derive(Debug)]
pub enum DecodeResult<T: Debug> {
    /// The ranges of data necessary to proceed
    // TODO: distinguish between minimum needed to make progress and what could be used?
    NeedsData(Vec<Range<u64>>),
    /// The decoder produced an output item
    Data(T),
    /// The decoder finished processing
    Finished,
}

#[cfg(feature = "variant_experimental")]
pub mod variant;
experimental!(pub mod geospatial);