pyarrow.dataset.write_dataset

pyarrow.dataset.write_dataset(data, base_dir, basename_template=None, format=None, partitioning=None, partitioning_flavor=None, schema=None, filesystem=None, file_options=None, use_threads=True, max_partitions=None, file_visitor=None, existing_data_behavior='error')[source]

Write a dataset to a given format and partitioning.

Parameters
  • data (Dataset, Table/RecordBatch, RecordBatchReader, list of Table/RecordBatch, or iterable of RecordBatch) – The data to write. This can be a Dataset instance or in-memory Arrow data. If an iterable is given, the schema must also be given. A minimal usage sketch follows the parameter list.

  • base_dir (str) – The root directory where to write the dataset.

  • basename_template (str, optional) – A template string used to generate basenames of written data files. The token ‘{i}’ will be replaced with an automatically incremented integer. If not specified, it defaults to “part-{i}.” + format.default_extname

  • format (FileFormat or str) – The format in which to write the dataset. Currently supported: “parquet”, “ipc”/”feather”. If a FileSystemDataset is being written and format is not specified, it defaults to the same format as the specified FileSystemDataset. When writing a Table or RecordBatch, this keyword is required.

  • partitioning (Partitioning or list[str], optional) – The partitioning scheme specified with the partitioning() function or a list of field names. When providing a list of field names, you can use partitioning_flavor to drive which partitioning type should be used (a partitioning sketch follows the parameter list).

  • partitioning_flavor (str, optional) – One of the partitioning flavors supported by pyarrow.dataset.partitioning. If omitted, the default of partitioning() is used, which is directory partitioning.

  • schema (Schema, optional) – The schema of the data to write. Required when data is given as an iterable of RecordBatch; otherwise it is inferred from the data.

  • filesystem (FileSystem, optional) – The filesystem into which the dataset is written. If not specified, it is inferred from base_dir.

  • file_options (FileWriteOptions, optional) – FileFormat-specific write options, created using the FileFormat.make_write_options() function (a sketch follows the parameter list).

  • use_threads (bool, default True) – Write files in parallel. If enabled, the maximum level of parallelism is determined by the number of available CPU cores.

  • max_partitions (int, default 1024) – Maximum number of partitions any batch may be written into.

  • file_visitor (Function) –

    If set, this function will be called with a WrittenFile instance for each file created during the call. This object will have both a path attribute and a metadata attribute.

    The path attribute will be a string containing the path to the created file.

    The metadata attribute will be the parquet metadata of the file. This metadata will have the file path attribute set and can be used to build a _metadata file. The metadata attribute will be None if the format is not parquet. A sketch of this workflow follows the parameter list.

    Example visitor which simply collects the filenames created:

    visited_paths = []
    
    def file_visitor(written_file):
        visited_paths.append(written_file.path)
    

  • existing_data_behavior ('error' | 'overwrite_or_ignore' | 'delete_matching') –

    Controls how the dataset will handle data that already exists in the destination. The default behavior (‘error’) is to raise an error if any data exists in the destination.

    ‘overwrite_or_ignore’ will ignore any existing data and will overwrite files with the same name as an output file. Other existing files will be ignored. This behavior, in combination with a unique basename_template for each write, will allow for an append workflow (sketched after this list).

    ‘delete_matching’ is useful when you are writing a partitioned dataset. The first time each partition directory is encountered, the entire directory will be deleted. This allows you to overwrite old partitions completely.
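
The sketches below are illustrative only; directory names such as "my_dataset_root" and the toy tables are hypothetical stand-ins, not part of the API. First, a minimal write of an in-memory Table, where format must be given explicitly:

    import pyarrow as pa
    import pyarrow.dataset as ds

    table = pa.table({"year": [2020, 2020, 2021], "value": [1.0, 2.0, 3.0]})

    # "format" is required because the data is an in-memory Table rather
    # than an existing FileSystemDataset.
    ds.write_dataset(table, "my_dataset_root", format="parquet")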
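
A sketch of partitioning by field name, assuming the same toy table; passing partitioning_flavor="hive" selects Hive-style key=value directories instead of the default directory partitioning:

    import pyarrow as pa
    import pyarrow.dataset as ds

    table = pa.table({"year": [2020, 2020, 2021], "value": [1.0, 2.0, 3.0]})

    # Rows are routed into year=2020/ and year=2021/ subdirectories.
    ds.write_dataset(
        table,
        "partitioned_root",
        format="parquet",
        partitioning=["year"],
        partitioning_flavor="hive",
    )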
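
A sketch of passing format-specific write options; here Parquet compression is chosen through ParquetFileFormat.make_write_options() (the compression value is just an example):

    import pyarrow as pa
    import pyarrow.dataset as ds

    table = pa.table({"year": [2020, 2021], "value": [1.0, 2.0]})

    parquet_format = ds.ParquetFileFormat()
    # Keyword arguments are forwarded to the Parquet writer.
    write_options = parquet_format.make_write_options(compression="zstd")

    ds.write_dataset(table, "compressed_root", format=parquet_format,
                     file_options=write_options)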
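
A sketch of using file_visitor to collect per-file Parquet metadata and combine it into a _metadata file with pyarrow.parquet.write_metadata(); the output paths are hypothetical:

    import pyarrow as pa
    import pyarrow.dataset as ds
    import pyarrow.parquet as pq

    table = pa.table({"year": [2020, 2021], "value": [1.0, 2.0]})

    collected_metadata = []

    def file_visitor(written_file):
        # written_file.metadata already has the file path attribute set.
        collected_metadata.append(written_file.metadata)

    ds.write_dataset(table, "visited_root", format="parquet",
                     file_visitor=file_visitor)

    # Combine the collected pieces into a dataset-level _metadata file.
    pq.write_metadata(table.schema, "visited_root/_metadata",
                      metadata_collector=collected_metadata)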
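
A sketch of the append workflow enabled by ‘overwrite_or_ignore’: a basename_template made unique per write (here with a uuid, purely as an example) keeps new files from colliding with files written earlier:

    import uuid

    import pyarrow as pa
    import pyarrow.dataset as ds

    table = pa.table({"year": [2021], "value": [4.0]})

    # "{i}" is still substituted by write_dataset; the uuid makes each
    # write's filenames distinct from previous writes into the same tree.
    template = f"part-{{i}}-{uuid.uuid4().hex}.parquet"

    ds.write_dataset(table, "append_root", format="parquet",
                     basename_template=template,
                     existing_data_behavior="overwrite_or_ignore")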