pyarrow.cuda.Context

class pyarrow.cuda.Context(*args, **kwargs)

Bases: pyarrow.lib._Weakrefable

CUDA driver context.
__init__()

Create a CUDA driver context for a particular device.

If a CUDA context handle is passed, it is wrapped; otherwise a default CUDA context for the given device is requested.

Parameters
    device_number (int, default 0) – The GPU device for which the CUDA driver context is requested.
    handle (int, optional) – A CUDA handle for a shared context that has been created by another library.
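Examples

A minimal sketch of requesting the default context for device 0; it assumes a CUDA-enabled build of pyarrow and at least one visible GPU:

    from pyarrow import cuda

    # Request the default driver context for GPU device 0.
    ctx = cuda.Context(0)

    print(ctx.device_number)    # 0
    print(ctx.bytes_allocated)  # 0 until buffers are allocated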
Methods

__init__()
    Create a CUDA driver context for a particular device.
buffer_from_data(self, data, …)
    Create device buffer and initialize with data.
buffer_from_object(self, obj)
    Create device buffer view of arbitrary object that references device accessible memory.
foreign_buffer(self, address, size[, base])
    Create device buffer from address and size as a view.
from_numba([context])
    Create a Context instance from a Numba CUDA context.
get_device_address(self, uintptr_t address)
    Return the device address that is reachable from kernels running in the context.
get_num_devices()
    Return the number of GPU devices.
new_buffer(self, int64_t nbytes)
    Return new device buffer.
open_ipc_buffer(self, ipc_handle)
    Open existing CUDA IPC memory handle.
synchronize(self)
    Blocks until the device has completed all preceding requested tasks.
to_numba(self)
    Convert Context to a Numba CUDA context.

Attributes

bytes_allocated
    Return the number of allocated bytes.
device_number
    Return context device number.
handle
    Return pointer to context handle.
buffer_from_data(self, data, int64_t offset=0, int64_t size=-1)

Create device buffer and initialize with data.

Parameters
    data ({CudaBuffer, HostBuffer, Buffer, array-like}) – Data to be copied to the device buffer.
    offset (int, default 0) – Offset into the input buffer at which copying starts.
    size (int, default -1) – Size of the device buffer in bytes. By default, everything from the given offset is copied.
Returns
    cbuf (CudaBuffer) – Device buffer with copied data.
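Examples

A short sketch of copying a NumPy array to the device and reading it back, assuming a CUDA-enabled pyarrow build:

    import numpy as np
    from pyarrow import cuda

    ctx = cuda.Context(0)
    data = np.arange(10, dtype=np.int32)

    # Copy the host array into a freshly allocated device buffer.
    cbuf = ctx.buffer_from_data(data)

    # Copy the bytes back to the host and compare.
    roundtrip = np.frombuffer(cbuf.copy_to_host(), dtype=np.int32)
    assert (roundtrip == data).all()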
buffer_from_object(self, obj)

Create device buffer view of arbitrary object that references device accessible memory.

When the object contains a non-contiguous view of device accessible memory, the returned device buffer is a contiguous view of that memory; that is, it also covers the intermediate data that is otherwise invisible to the input object.

Parameters
    obj ({object, Buffer, HostBuffer, CudaBuffer, ..}) – An object that holds a (device or host) address accessible from the device. This includes objects with types defined in pyarrow.cuda as well as arbitrary objects that implement the CUDA array interface as defined by Numba.
Returns
    cbuf (CudaBuffer) – Device buffer as a view of device accessible memory.
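Examples

An illustrative sketch that wraps a Numba device array, which exposes the CUDA array interface; it assumes Numba with CUDA support is installed:

    import numpy as np
    from numba import cuda as nb_cuda
    from pyarrow import cuda

    ctx = cuda.Context(0)
    arr = np.arange(8, dtype=np.float32)

    # A Numba device array implements __cuda_array_interface__.
    darr = nb_cuda.to_device(arr)

    # View the same device memory through a CudaBuffer (no copy).
    cbuf = ctx.buffer_from_object(darr)
    assert cbuf.size == arr.nbytes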
bytes_allocated

Return the number of allocated bytes.

device_number

Return context device number.
foreign_buffer(self, address, size, base=None)

Create device buffer from address and size as a view.

The caller is responsible for allocating and freeing the memory. When address == size == 0, a new zero-sized buffer is returned.

Parameters
    address (int) – Starting address of the buffer. The address may refer to device or host memory, but it must be accessible from the device after mapping it with the get_device_address method.
    size (int) – Size of the device buffer in bytes.
    base ({None, object}) – Object that owns the referenced memory.
Returns
    cbuf (CudaBuffer) – Device buffer as a view of device reachable memory.
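Examples

A sketch of creating a non-owning view over memory that is already device accessible, here a slice of another CudaBuffer's allocation (illustrative only):

    from pyarrow import cuda

    ctx = cuda.Context(0)
    owner = ctx.new_buffer(64)

    # View the second half of the existing allocation; passing the owner
    # as `base` keeps the underlying memory alive for the view's lifetime.
    view = ctx.foreign_buffer(owner.address + 32, 32, base=owner)
    assert view.size == 32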
static from_numba(context=None)

Create a Context instance from a Numba CUDA context.

Parameters
    context ({numba.cuda.cudadrv.driver.Context, None}) – A Numba CUDA context instance. If None, the current Numba context is used.
Returns
    shared_context (pyarrow.cuda.Context) – Context instance.
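Examples

A sketch of sharing Numba's current CUDA context with pyarrow, assuming Numba with CUDA support:

    from numba import cuda as nb_cuda
    from pyarrow import cuda

    # Ensure Numba has an active context, then wrap it.
    nb_cuda.current_context()
    ctx = cuda.Context.from_numba()

    # Device buffers allocated through `ctx` now live in Numba's context.
    cbuf = ctx.new_buffer(128)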
get_device_address(self, uintptr_t address)

Return the device address that is reachable from kernels running in the context.

Parameters
    address (int) – Memory address value.
Returns
    device_address (int) – Device address accessible from the device context.

Notes

The device address is defined as a memory address accessible by the device. While it is often a device memory address, it can also be a host memory address, for instance when the memory was allocated as host memory (using cudaMallocHost or cudaHostAlloc), as managed memory (using cudaMallocManaged), or when host memory was page-locked (using cudaHostRegister).
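Examples

A sketch of mapping page-locked host memory to a device-reachable address; it uses pyarrow.cuda.new_host_buffer to allocate pinned host memory and is illustrative only:

    from pyarrow import cuda

    ctx = cuda.Context(0)

    # Pinned (page-locked) host allocation.
    hbuf = cuda.new_host_buffer(1024)

    # Address usable from kernels running in this context; it can in turn
    # be wrapped with foreign_buffer to obtain a buffer view.
    dev_addr = ctx.get_device_address(hbuf.address)
    cbuf = ctx.foreign_buffer(dev_addr, hbuf.size, base=hbuf)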
static get_num_devices()

Return the number of GPU devices.

handle

Return pointer to context handle.
new_buffer(self, int64_t nbytes)

Return new device buffer.

Parameters
    nbytes (int) – Number of bytes to be allocated.
Returns
    buf (CudaBuffer) – Allocated buffer.
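Examples

A sketch of allocating an uninitialized device buffer and filling it from the host:

    import numpy as np
    from pyarrow import cuda

    ctx = cuda.Context(0)
    data = np.arange(256, dtype=np.uint8)

    # Allocate uninitialized device memory, then copy host bytes into it.
    buf = ctx.new_buffer(data.nbytes)
    buf.copy_from_host(data, position=0, nbytes=data.nbytes)
    assert ctx.bytes_allocated >= data.nbytes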
open_ipc_buffer(self, ipc_handle)

Open existing CUDA IPC memory handle.

Parameters
    ipc_handle (IpcMemHandle) – Opaque pointer to a CUipcMemHandle (driver API).
Returns
    buf (CudaBuffer) – Device buffer referencing the shared memory.
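Examples

An illustrative sketch of the IPC flow. In practice the serialized handle is sent to a different process and opened there; some drivers refuse to open an IPC handle inside the process that exported it:

    import pyarrow as pa
    from pyarrow import cuda

    ctx = cuda.Context(0)
    cbuf = ctx.new_buffer(64)

    # Producer: export an IPC handle and serialize it to bytes that can be
    # shipped to another process (pipe, socket, shared file, ...).
    handle_bytes = cbuf.export_for_ipc().serialize().to_pybytes()

    # Consumer (normally a different process): rebuild the handle and open
    # the shared device memory.
    ipc_handle = cuda.IpcMemHandle.from_buffer(pa.py_buffer(handle_bytes))
    shared = ctx.open_ipc_buffer(ipc_handle)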
synchronize(self)

Blocks until the device has completed all preceding requested tasks.
to_numba(self)

Convert Context to a Numba CUDA context.

Returns
    context (numba.cuda.cudadrv.driver.Context) – Numba CUDA context instance.
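Examples

A brief sketch of handing the pyarrow context over to Numba, assuming Numba with CUDA support is installed:

    import numpy as np
    from pyarrow import cuda

    ctx = cuda.Context(0)
    cbuf = ctx.buffer_from_data(np.zeros(16, dtype=np.float32))

    # Obtain the corresponding Numba driver context, e.g. to build Numba
    # device arrays on top of pyarrow-owned device memory.
    nb_context = ctx.to_numba()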