Apache Arrow (C++)
A columnar in-memory analytics layer designed to accelerate big data.
arrow::gpu Namespace Reference


class  CudaBuffer
 An Arrow buffer located on a GPU device.
class  CudaBufferReader
 File interface for zero-copy reads from CUDA buffers.
class  CudaBufferWriter
 File interface for writing to CUDA buffers, with optional buffering.
class  CudaContext
 Friendlier interface to the CUDA driver API (see the sketch following this list).
struct  CudaDeviceInfo
class  CudaDeviceManager
class  CudaHostBuffer
 Device-accessible CPU memory created using cudaHostAlloc.
class  CudaIpcMemHandle
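Taken together, CudaDeviceManager, CudaContext, and CudaBuffer are the entry points for device memory management. The snippet below is a minimal sketch of allocating a device buffer; the header path arrow/gpu/cuda_api.h and the GetInstance, GetContext, and Allocate signatures are assumptions based on this API and should be checked against the headers of your Arrow build.

    #include <arrow/gpu/cuda_api.h>  // header path assumed

    // Sketch: obtain the singleton device manager, a context for GPU 0, and a
    // 1024-byte device buffer. Method names are assumed from the arrow::gpu API.
    arrow::Status AllocateOnDevice(std::shared_ptr<arrow::gpu::CudaBuffer>* out) {
      arrow::gpu::CudaDeviceManager* manager;
      ARROW_RETURN_NOT_OK(arrow::gpu::CudaDeviceManager::GetInstance(&manager));

      std::shared_ptr<arrow::gpu::CudaContext> context;
      ARROW_RETURN_NOT_OK(manager->GetContext(/*device_number=*/0, &context));

      // The returned CudaBuffer owns GPU device memory allocated from `context`.
      return context->Allocate(1024, out);
    }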


Status SerializeRecordBatch (const RecordBatch &batch, CudaContext *ctx, std::shared_ptr< CudaBuffer > *out)
 Write record batch message to GPU device memory.
Status ReadMessage (CudaBufferReader *reader, MemoryPool *pool, std::unique_ptr< ipc::Message > *message)
 Read Arrow IPC message located on GPU device.
Status ReadRecordBatch (const std::shared_ptr< Schema > &schema, const std::shared_ptr< CudaBuffer > &buffer, MemoryPool *pool, std::shared_ptr< RecordBatch > *out)
 ReadRecordBatch specialized to handle metadata on CUDA device.
Status AllocateCudaHostBuffer (const int64_t size, std::shared_ptr< CudaHostBuffer > *out)
 Allocate CUDA-accessible memory on CPU host.

Function Documentation

◆ AllocateCudaHostBuffer()

Status arrow::gpu::AllocateCudaHostBuffer (const int64_t size, std::shared_ptr< CudaHostBuffer > *out)

Allocate CUDA-accessible memory on CPU host.

Parameters
    [in]   size   number of bytes
    [out]  out    the allocated buffer
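For example, a pinned staging buffer for host-device transfers could be obtained as follows; this is a sketch built only on the signature documented above.

    std::shared_ptr<arrow::gpu::CudaHostBuffer> host_buffer;
    // Allocate 4096 bytes of CUDA-accessible (pinned) host memory.
    arrow::Status st = arrow::gpu::AllocateCudaHostBuffer(4096, &host_buffer);
    if (!st.ok()) {
      // e.g., no CUDA driver or device available
    }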

◆ ReadMessage()

Status arrow::gpu::ReadMessage (CudaBufferReader *reader, MemoryPool *pool, std::unique_ptr< ipc::Message > *message)

Read Arrow IPC message located on GPU device.

Parameters
    [in]   reader   a CudaBufferReader
    [in]   pool     a MemoryPool to allocate CPU memory for the metadata
    [out]  message  the deserialized message, body still on device

This function reads the message metadata into host memory, but leaves the message body on the device.
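As a sketch, a serialized message already resident in a CudaBuffer could be read like this; the CudaBufferReader constructor taking a std::shared_ptr<CudaBuffer> is an assumption not documented on this page.

    // device_buffer: std::shared_ptr<arrow::gpu::CudaBuffer> holding a serialized IPC message.
    arrow::gpu::CudaBufferReader reader(device_buffer);  // constructor form assumed
    std::unique_ptr<arrow::ipc::Message> message;
    // Metadata is copied into host memory from the default pool; the body stays on the GPU.
    arrow::Status st =
        arrow::gpu::ReadMessage(&reader, arrow::default_memory_pool(), &message);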

◆ ReadRecordBatch()

Status arrow::gpu::ReadRecordBatch (const std::shared_ptr< Schema > &schema, const std::shared_ptr< CudaBuffer > &buffer, MemoryPool *pool, std::shared_ptr< RecordBatch > *out)

ReadRecordBatch specialized to handle metadata on CUDA device.

Parameters
    [in]   schema   the Schema for the record batch
    [in]   buffer   a CudaBuffer containing the complete IPC message
    [in]   pool     a MemoryPool to use for allocating space for the metadata
    [out]  out      the reconstructed RecordBatch, with device pointers
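For instance, a record batch whose serialized form already resides on the device could be reconstructed as follows (schema and device_buffer are assumed to be in scope):

    // schema: std::shared_ptr<arrow::Schema>
    // device_buffer: std::shared_ptr<arrow::gpu::CudaBuffer> with the complete IPC message
    std::shared_ptr<arrow::RecordBatch> batch;
    arrow::Status st = arrow::gpu::ReadRecordBatch(
        schema, device_buffer, arrow::default_memory_pool(), &batch);
    // On success the columns of `batch` point at device memory; copy buffers back to the
    // host before inspecting values on the CPU.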

◆ SerializeRecordBatch()

Status arrow::gpu::SerializeRecordBatch (const RecordBatch &batch, CudaContext *ctx, std::shared_ptr< CudaBuffer > *out)

Write record batch message to GPU device memory.

Parameters
    [in]   batch   record batch to write
    [in]   ctx     CudaContext to allocate device memory from
    [out]  out     the returned device buffer which contains the record batch message
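A sketch of serializing a host-resident batch into device memory, assuming a CudaContext obtained from CudaDeviceManager as in the sketch near the class list above:

    // batch: std::shared_ptr<arrow::RecordBatch>
    // context: std::shared_ptr<arrow::gpu::CudaContext>
    std::shared_ptr<arrow::gpu::CudaBuffer> device_serialized;
    arrow::Status st =
        arrow::gpu::SerializeRecordBatch(*batch, context.get(), &device_serialized);
    // device_serialized now holds the complete record batch message in GPU memory and can,
    // for example, be read back with ReadRecordBatch above.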