Compute Functions

Arrow supports logical compute operations over inputs of possibly varying types.

The standard compute operations are provided by the pyarrow.compute module and can be used directly:

>>> import pyarrow as pa
>>> import pyarrow.compute as pc
>>> a = pa.array([1, 1, 2, 3])
>>> pc.sum(a)
<pyarrow.Int64Scalar: 7>

Grouped aggregation functions, by contrast, raise an exception when called directly and must be used through the pyarrow.Table.group_by() capabilities. See Grouped Aggregations for more details.

Standard Compute Functions

Many compute functions support both array (chunked or not) and scalar inputs, but some mandate one or the other. For example, sort_indices requires its first and only input to be an array.

Below are a few simple examples:

>>> import pyarrow as pa
>>> import pyarrow.compute as pc
>>> a = pa.array([1, 1, 2, 3])
>>> b = pa.array([4, 1, 2, 8])
>>> pc.equal(a, b)
<pyarrow.lib.BooleanArray object at 0x7f686e4eef30>
[
  false,
  true,
  true,
  false
]
>>> x, y = pa.scalar(7.8), pa.scalar(9.3)
>>> pc.multiply(x, y)
<pyarrow.DoubleScalar: 72.54>
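
Array and scalar inputs can also be mixed in a single call; the scalar is broadcast against each element of the array:

>>> pc.add(a, pa.scalar(10))
<pyarrow.lib.Int64Array object at 0x...>
[
  11,
  11,
  12,
  13
]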

These functions can do more than just element-by-element operations. Here is an example of sorting a table:

>>> import pyarrow as pa
>>> import pyarrow.compute as pc
>>> t = pa.table({'x':[1,2,3],'y':[3,2,1]})
>>> i = pc.sort_indices(t, sort_keys=[('y', 'ascending')])
>>> i
<pyarrow.lib.UInt64Array object at 0x7fcee5df75e8>
[
  2,
  1,
  0
]
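
The resulting indices can then be passed to Table.take() to materialize the sorted table:

>>> t.take(i)
pyarrow.Table
x: int64
y: int64
----
x: [[3,2,1]]
y: [[1,2,3]]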

For a complete list of the compute functions that PyArrow provides, refer to the Compute Functions reference.

Grouped Aggregations

PyArrow supports grouped aggregations over pyarrow.Table through the pyarrow.Table.group_by() method. The method will return a grouping declaration to which the hash aggregation functions can be applied:

>>> import pyarrow as pa
>>> t = pa.table([
...       pa.array(["a", "a", "b", "b", "c"]),
...       pa.array([1, 2, 3, 4, 5]),
... ], names=["keys", "values"])
>>> t.group_by("keys").aggregate([("values", "sum")])
pyarrow.Table
values_sum: int64
keys: string
----
values_sum: [[3,7,5]]
keys: [["a","b","c"]]

The "sum" aggregation passed to the aggregate method in the previous example is the hash_sum compute function.

Multiple aggregations can be performed at the same time by providing them to the aggregate method:

>>> import pyarrow as pa
>>> t = pa.table([
...       pa.array(["a", "a", "b", "b", "c"]),
...       pa.array([1, 2, 3, 4, 5]),
... ], names=["keys", "values"])
>>> t.group_by("keys").aggregate([
...    ("values", "sum"),
...    ("keys", "count")
... ])
pyarrow.Table
values_sum: int64
keys_count: int64
keys: string
----
values_sum: [[3,7,5]]
keys_count: [[2,2,1]]
keys: [["a","b","c"]]

Aggregation options can also be provided for each aggregation function; for example, we can use CountOptions to change how null values are counted:

>>> import pyarrow as pa
>>> import pyarrow.compute as pc
>>> table_with_nulls = pa.table([
...    pa.array(["a", "a", "a"]),
...    pa.array([1, None, None])
... ], names=["keys", "values"])
>>> table_with_nulls.group_by(["keys"]).aggregate([
...    ("values", "count", pc.CountOptions(mode="all"))
... ])
pyarrow.Table
values_count: int64
keys: string
----
values_count: [[3]]
keys: [["a"]]
>>> table_with_nulls.group_by(["keys"]).aggregate([
...    ("values", "count", pc.CountOptions(mode="only_valid"))
... ])
pyarrow.Table
values_count: int64
keys: string
----
values_count: [[1]]
keys: [["a"]]

Following is a list of all supported grouped aggregation functions. You can use them with or without the "hash_" prefix.

Function                   Description                                                  Options
hash_all                   Whether all elements in each group evaluate to true          ScalarAggregateOptions
hash_any                   Whether any element in each group evaluates to true          ScalarAggregateOptions
hash_approximate_median    Compute the approximate median of values in each group      ScalarAggregateOptions
hash_count                 Count the number of null / non-null values in each group    CountOptions
hash_count_distinct        Count the distinct values in each group                     CountOptions
hash_distinct              Keep the distinct values in each group                      CountOptions
hash_list                  List all values in each group
hash_max                   Compute the maximum of values in each group                 ScalarAggregateOptions
hash_mean                  Compute the mean of values in each group                    ScalarAggregateOptions
hash_min                   Compute the minimum of values in each group                 ScalarAggregateOptions
hash_min_max               Compute the minimum and maximum of values in each group     ScalarAggregateOptions
hash_one                   Get one value from each group
hash_product               Compute the product of values in each group                 ScalarAggregateOptions
hash_stddev                Compute the standard deviation of values in each group      VarianceOptions
hash_sum                   Sum values in each group                                    ScalarAggregateOptions
hash_tdigest               Compute approximate quantiles of values in each group       TDigestOptions
hash_variance              Compute the variance of values in each group                VarianceOptions
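
For instance, "min_max" returns a struct column holding both bounds for each group; a minimal sketch reusing the table from the earlier examples:

import pyarrow as pa

t = pa.table([
    pa.array(["a", "a", "b", "b", "c"]),
    pa.array([1, 2, 3, 4, 5]),
], names=["keys", "values"])

# "min_max" (i.e. hash_min_max) yields one struct<min, max> per group.
t.group_by("keys").aggregate([("values", "min_max")])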

Table and Dataset Joins

Both Table and Dataset support join operations through Table.join() and Dataset.join() methods.

The methods accept a right table or dataset that will be joined to the initial one, and one or more key column names present in both entities on which to perform the join.

By default a left outer join is performed, but it’s possible to ask for any of the supported join types:

  • left semi

  • right semi

  • left anti

  • right anti

  • inner

  • left outer

  • right outer

  • full outer

A basic join can be performed just by providing a table and a key on which the join should be performed:

import pyarrow as pa

table1 = pa.table({'id': [1, 2, 3],
                   'year': [2020, 2022, 2019]})

table2 = pa.table({'id': [3, 4],
                   'n_legs': [5, 100],
                   'animal': ["Brittle stars", "Centipede"]})

joined_table = table1.join(table2, keys="id")

The result will be a new table created by joining table1 with table2 on the id key with a left outer join:

pyarrow.Table
id: int64
year: int64
n_legs: int64
animal: string
----
id: [[3,1,2]]
year: [[2019,2020,2022]]
n_legs: [[5,null,null]]
animal: [["Brittle stars",null,null]]

We can perform other types of joins, such as a full outer join, by passing them to the join_type argument:

table1.join(table2, keys='id', join_type="full outer")

In that case the result would be:

pyarrow.Table
id: int64
year: int64
n_legs: int64
animal: string
----
id: [[3,1,2,4]]
year: [[2019,2020,2022,null]]
n_legs: [[5,null,null,100]]
animal: [["Brittle stars",null,null,"Centipede"]]

It’s also possible to provide additional join keys, so that the join happens on two keys instead of one. For example, we can add a year column to table2 so that we can join on ('id', 'year'):

table2_withyear = table2.append_column("year", pa.array([2019, 2022]))
table1.join(table2_withyear, keys=["id", "year"])

The result will be a table where only the entry with id=3 and year=2019 has data; the rest will be null:

pyarrow.Table
id: int64
year: int64
animal: string
n_legs: int64
----
id: [[3,1,2]]
year: [[2019,2020,2022]]
animal: [["Brittle stars",null,null]]
n_legs: [[5,null,null]]

The same capabilities are available for Dataset.join() too, so you can take two datasets and join them:

import pyarrow.dataset as ds

ds1 = ds.dataset(table1)
ds2 = ds.dataset(table2)

joined_ds = ds1.join(ds2, keys="id")

The resulting dataset will be an InMemoryDataset containing the joined data:

>>> joined_ds.head(5)
pyarrow.Table
id: int64
year: int64
animal: string
n_legs: int64
----
id: [[3,1,2]]
year: [[2019,2020,2022]]
animal: [["Brittle stars",null,null]]
n_legs: [[5,null,null]]

Filtering by Expressions

Table and Dataset can both be filtered using a boolean Expression.

The expression can be built starting from a pyarrow.compute.field(). Comparisons and transformations can then be applied to one or more fields to build the filter expression you care about.

Most Compute Functions can be used to perform transformations on a field.
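
As a hedged sketch, any such function can be applied to a field reference to build an expression; here utf8_length is used over a hypothetical string column "chars":

import pyarrow.compute as pc

# Expression matching rows whose "chars" value is one character long.
length_filter = pc.utf8_length(pc.field("chars")) == pc.scalar(1)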

As a worked example, we could build a filter to find all rows that are even in column "nums":

import pyarrow.compute as pc
even_filter = (pc.bit_wise_and(pc.field("nums"), pc.scalar(1)) == pc.scalar(0))

Note

The filter finds even numbers by performing a bitwise and between each number and 1. Since 1 is 00000001 in binary form, only numbers whose last bit is set to 1 return a non-zero result from the bit_wise_and operation; this identifies the odd numbers. Given that we are interested in the even ones, we check that the result of the bit_wise_and operation equals 0. Only numbers whose last bit is 0 yield 0 as the result of num & 1, and since every number whose last bit is 0 is a multiple of 2, the filter keeps exactly the even numbers.
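
To see the mechanics concretely, here is the bitwise and evaluated directly on a small array:

>>> import pyarrow as pa
>>> import pyarrow.compute as pc
>>> pc.bit_wise_and(pa.array([1, 2, 3, 4]), 1)
<pyarrow.lib.Int64Array object at 0x...>
[
  1,
  0,
  1,
  0
]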

Once we have our filter, we can provide it to the Table.filter() method to filter our table only for the matching rows:

>>> table = pa.table({'nums': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
...                   'chars': ["a", "b", "c", "d", "e", "f", "g", "h", "i", "l"]})
>>> table.filter(even_filter)
pyarrow.Table
nums: int64
chars: string
----
nums: [[2,4,6,8,10]]
chars: [["b","d","f","h","l"]]

Multiple filters can be combined using &, |, and ~ to perform and, or, and not operations. For example, using ~even_filter will filter for all numbers that are odd:

>>> table.filter(~even_filter)
pyarrow.Table
nums: int64
chars: string
----
nums: [[1,3,5,7,9]]
chars: [["a","c","e","g","i"]]

We could also build a filter that finds all even numbers greater than 5 by combining even_filter with a pc.field("nums") > 5 filter:

>>> table.filter(even_filter & (pc.field("nums") > 5))
pyarrow.Table
nums: int64
chars: string
----
nums: [[6,8,10]]
chars: [["f","h","l"]]

A Dataset can currently be filtered by passing a filter argument to the Dataset.to_table() method. See Filtering data in the Dataset documentation.
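
A minimal sketch, reusing even_filter on a dataset built from the table above:

import pyarrow.dataset as ds

dataset = ds.dataset(table)
# The filter expression is applied while the dataset is read back
# into a table, so only matching rows are materialized.
dataset.to_table(filter=even_filter)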