Apache Arrow 0.6.0 Release


Published 16 Aug 2017
By Wes McKinney (wesm)

The Apache Arrow team is pleased to announce the 0.6.0 release. It resolves 90 JIRAs and includes the new Plasma shared memory object store, along with improvements and bug fixes to the various language implementations. The Arrow memory format has remained stable since the 0.3.x release.

See the Install Page to learn how to get the libraries for your platform. The complete changelog is also available.

Plasma Shared Memory Object Store

This release includes the Plasma Store, which you can read more about in the linked blog post. This system was originally developed as part of the Ray Project at the UC Berkeley RISELab. We recognized that Plasma would be highly valuable to the Arrow community as a tool for shared memory management and zero-copy deserialization. Additionally, we believe we will be able to develop a stronger software stack through sharing of IO and buffer management code.

The Plasma store is a server application that runs as a separate process. A reference C++ client, with Python bindings, is available in this release. Clients can be developed in Java or other languages in the future to enable simple sharing of complex datasets through shared memory.
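As a rough sketch of how the pieces fit together, the example below starts from a running store and shares a Python object with it. It is written against the pyarrow.plasma bindings as they appeared in later releases; the plasma_store binary name, the connect() arguments, and the put()/get() convenience methods are assumptions here and have varied between Arrow versions.

# In a separate terminal, start the store with 1 GB of shared memory
# (the executable name and flags may differ by release):
#   plasma_store -m 1000000000 -s /tmp/plasma

import pyarrow.plasma as plasma

# Connect to the store's UNIX domain socket; older releases also took
# a manager socket name and a release delay as extra arguments.
client = plasma.connect("/tmp/plasma")

# put() copies a serialized Python object into shared memory and returns
# its ObjectID; get() maps the sealed object back into this process.
object_id = client.put([1, 2, 3, "hello"])
print(client.get(object_id))

Because the object lives in the store's shared memory, any number of client processes on the same machine can fetch it without an extra copy.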

Arrow Format Addition: Map type

We added a Map logical type to represent ordered and unordered maps in-memory. This corresponds to the MAP logical type annotation in the Parquet format (where maps are represented as repeated structs).

Map is represented as a list of structs. It is the first example of a logical type whose physical representation is a nested type. Map containers have not yet been implemented in any of the languages, but this can be done in a future release.

As an example, the Python data:

data = [{'a': 1, 'bb': 2, 'cc': 3}, {'dddd': 4}]

could be represented in an Arrow Map<String, Int32> as:

Map<String, Int32> = List<Struct<keys: String, values: Int32>>
  is_valid: [true, true]
  offsets: [0, 3, 4]
  values: Struct<keys: String, values: Int32>
    children:
      - keys: String
          is_valid: [true, true, true, true]
          offsets: [0, 1, 3, 5, 9]
          data: abbccdddd
      - values: Int32
          is_valid: [true, true, true, true]
          data: [1, 2, 3, 4]
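To make this layout concrete, here is a sketch that builds the same list-of-structs representation directly. It assumes a pyarrow version recent enough to construct nested arrays from Python objects (0.6.0 itself cannot do this yet); the keys/values field names simply mirror the diagram above.

import pyarrow as pa

# Physical representation of Map<String, Int32>:
# a List whose value type is Struct<keys: string, values: int32>
entry = pa.struct([('keys', pa.string()), ('values', pa.int32())])
arr = pa.array(
    [[{'keys': 'a', 'values': 1},
      {'keys': 'bb', 'values': 2},
      {'keys': 'cc', 'values': 3}],
     [{'keys': 'dddd', 'values': 4}]],
    type=pa.list_(entry))

print(arr.type)     # list<item: struct<keys: string, values: int32>>
print(arr.offsets)  # [0, 3, 4], the list offsets shown above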

Python Changes

Some highlights of Python development, beyond bug fixes and general API improvements, include the following (a short usage sketch follows the list):

  • A new strings_to_categorical=True option for Table.to_pandas yields pandas Categorical types from Arrow binary and string columns
  • Expanded Hadoop Filesystem (HDFS) functionality to improve compatibility with Dask and other HDFS-aware Python libraries.
  • s3fs and other Dask-oriented filesystems can now be used with pyarrow.parquet.ParquetDataset
  • More graceful handling of pandas’s nanosecond timestamps when writing to Parquet format. You can now pass coerce_timestamps='ms' to cast to milliseconds, or 'us' for microseconds.
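The sketch below exercises the to_pandas and coerce_timestamps items above; the file name and sample data are made up for illustration, and the s3fs combination is left as a comment because it needs live S3 credentials.

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

df = pd.DataFrame({
    'name': ['a', 'bb', 'cc'],
    # pandas stores these as nanosecond-resolution timestamps
    'when': pd.to_datetime(['2017-08-16 00:00:00',
                            '2017-08-16 00:00:01',
                            '2017-08-16 00:00:02']),
})
table = pa.Table.from_pandas(df)

# String columns come back as pandas Categorical instead of object dtype
result = table.to_pandas(strings_to_categorical=True)
print(result.dtypes)

# Cast pandas's nanosecond timestamps to milliseconds when writing Parquet
pq.write_table(table, 'example.parquet', coerce_timestamps='ms')

# A Dask-oriented filesystem such as s3fs can back a ParquetDataset
# (hypothetical bucket and path):
#   import s3fs
#   fs = s3fs.S3FileSystem()
#   dataset = pq.ParquetDataset('my-bucket/path/to/data', filesystem=fs)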

Toward Arrow 1.0.0 and Beyond

We are still discussing the roadmap to the 1.0.0 release on the developer mailing list. The focus of 1.0.0 will likely be memory format stability and hardening integration tests across the remaining data types implemented in Java and C++. Please join the discussion there.