pyarrow.dataset.write_dataset

pyarrow.dataset.write_dataset(data, base_dir, basename_template=None, format=None, partitioning=None, partitioning_flavor=None, schema=None, filesystem=None, file_options=None, use_threads=True, max_partitions=None, max_open_files=None, max_rows_per_file=None, min_rows_per_group=None, max_rows_per_group=None, file_visitor=None, existing_data_behavior='error')

Write a dataset to a given format and partitioning.

Parameters
data : Dataset, Table/RecordBatch, RecordBatchReader, list of Table/RecordBatch, or iterable of RecordBatch

The data to write. This can be a Dataset instance or in-memory Arrow data. If an iterable is given, the schema must also be given.

base_dir : str

The root directory where to write the dataset.

basename_template : str, optional

A template string used to generate basenames of written data files. The token ‘{i}’ will be replaced with an automatically incremented integer. If not specified, it defaults to “part-{i}.” + format.default_extname.

format : FileFormat or str

The format in which to write the dataset. Currently supported: “parquet”, “ipc”/”arrow”/”feather”, and “csv”. If a FileSystemDataset is being written and format is not specified, it defaults to the same format as the specified FileSystemDataset. When writing a Table or RecordBatch, this keyword is required.

partitioning : Partitioning or list[str], optional

The partitioning scheme specified with the partitioning() function or a list of field names. When providing a list of field names, you can use partitioning_flavor to drive which partitioning type should be used.

partitioning_flavor : str, optional

One of the partitioning flavors supported by pyarrow.dataset.partitioning. If omitted, the default of partitioning() is used, which is directory partitioning.

schema : Schema, optional
filesystem : FileSystem, optional
file_options : pyarrow.dataset.FileWriteOptions, optional

FileFormat specific write options, created using the FileFormat.make_write_options() function.

use_threads : bool, default True

Write files in parallel. If enabled, the maximum parallelism, as determined by the number of available CPU cores, will be used.

max_partitions : int, default 1024

Maximum number of partitions any batch may be written into.

max_open_files : int, default 1024

If greater than 0, this limits the maximum number of files that can be left open. If an attempt is made to open too many files, the least recently used file will be closed. If this setting is too low, you may end up fragmenting your data into many small files.

max_rows_per_file : int, default 0

Maximum number of rows per file. If greater than 0, this limits how many rows are placed in any single file. Otherwise there will be no limit, and one file will be created in each output directory unless files need to be closed to respect max_open_files.

min_rows_per_group : int, default 0

Minimum number of rows per group. When the value is greater than 0, the dataset writer will batch incoming data and only write the row groups to the disk when sufficient rows have accumulated.

max_rows_per_group : int, default 1024 * 1024

Maximum number of rows per group. If the value is greater than 0, the dataset writer may split up large incoming batches into multiple row groups. If this value is set, then min_rows_per_group should also be set; otherwise you may end up with very small row groups.

file_visitorfunction

If set, this function will be called with a WrittenFile instance for each file created during the call. This object will have both a path attribute and a metadata attribute.

The path attribute will be a string containing the path to the created file.

The metadata attribute will be the parquet metadata of the file. This metadata will have the file path attribute set and can be used to build a _metadata file. The metadata attribute will be None if the format is not parquet.

Example visitor which simply collects the filenames created:

visited_paths = []

def file_visitor(written_file):
    visited_paths.append(written_file.path)

existing_data_behavior : ‘error’ | ‘overwrite_or_ignore’ | ‘delete_matching’

Controls how the dataset will handle data that already exists in the destination. The default behavior (‘error’) is to raise an error if any data exists in the destination.

‘overwrite_or_ignore’ will ignore any existing data and will overwrite files with the same name as an output file. Other existing files will be ignored. This behavior, in combination with a unique basename_template for each write, will allow for an append workflow.

‘delete_matching’ is useful when you are writing a partitioned dataset. The first time each partition directory is encountered the entire directory will be deleted. This allows you to overwrite old partitions completely.