pyarrow.dataset.write_dataset(data, base_dir, basename_template=None, format=None, partitioning=None, schema=None, filesystem=None, file_options=None, use_threads=True, use_async=False, max_partitions=None, file_visitor=None)[source]

Write a dataset to a given format and partitioning.

  • data (Dataset, Table/RecordBatch, RecordBatchReader, list of Table/RecordBatch, or iterable of RecordBatch) – The data to write. This can be a Dataset instance or in-memory Arrow data. If an iterable is given, the schema must also be given.

  • base_dir (str) – The root directory where to write the dataset.

  • basename_template (str, optional) – A template string used to generate basenames of written data files. The token ‘{i}’ will be replaced with an automatically incremented integer. If not specified, it defaults to “part-{i}.” + format.default_extname

  • format (FileFormat or str) – The format in which to write the dataset. Currently supported: “parquet”, “ipc”/”feather”. If a FileSystemDataset is being written and format is not specified, it defaults to the same format as the specified FileSystemDataset. When writing a Table or RecordBatch, this keyword is required.

  • partitioning (Partitioning, optional) – The partitioning scheme specified with the partitioning() function.

  • schema (Schema, optional) – The schema of the data to write. Required when an iterable of RecordBatch is passed as data; otherwise it is taken from the data itself.

  • filesystem (FileSystem, optional) – The filesystem to write the dataset to. If not specified, it is inferred from base_dir.

  • file_options (FileWriteOptions, optional) – FileFormat specific write options, created using the FileFormat.make_write_options() function.

  • use_threads (bool, default True) – Write files in parallel. If enabled, the maximum possible parallelism, determined by the number of available CPU cores, will be used.

  • use_async (bool, default False) – If enabled, an async scanner will be used that should offer better performance with high-latency/highly-parallel filesystems (e.g. S3).

  • max_partitions (int, default 1024) – Maximum number of partitions any batch may be written into.

  • file_visitor (Function) –

    If set, this function will be called with a WrittenFile instance for each file created during the call. This object will have both a path attribute and a metadata attribute.

    The path attribute will be a string containing the path to the created file.

    The metadata attribute will be the parquet metadata of the file. This metadata will have the file path attribute set and can be used to build a _metadata file. The metadata attribute will be None if the format is not parquet.

    Example visitor which simply collects the filenames created:

    visited_paths = []
    def file_visitor(written_file):
        visited_paths.append(written_file.path)