Apache Arrow (C++)
A columnar in-memory analytics layer designed to accelerate big data.
arrow::gpu Namespace Reference

Classes

class  CudaBuffer
 An Arrow buffer located on a GPU device.
 
class  CudaBufferReader
 File interface for zero-copy read from CUDA buffers.
 
class  CudaBufferWriter
 File interface for writing to CUDA buffers, with optional buffering.
 
class  CudaContext
 Friendlier interface to the CUDA driver API.
 
struct  CudaDeviceInfo
 
class  CudaDeviceManager
 
class  CudaHostBuffer
 Device-accessible CPU memory created using cudaHostAlloc.
 
class  CudaIpcMemHandle

Functions

Status SerializeRecordBatch (const RecordBatch &batch, CudaContext *ctx, std::shared_ptr< CudaBuffer > *out)
 Write record batch message to GPU device memory.
 
Status ReadMessage (CudaBufferReader *reader, MemoryPool *pool, std::unique_ptr< ipc::Message > *message)
 Read Arrow IPC message located on GPU device.
 
Status ReadRecordBatch (const std::shared_ptr< Schema > &schema, const std::shared_ptr< CudaBuffer > &buffer, MemoryPool *pool, std::shared_ptr< RecordBatch > *out)
 ReadRecordBatch specialized to handle metadata on CUDA device.
 
Status AllocateCudaHostBuffer (const int64_t size, std::shared_ptr< CudaHostBuffer > *out)
 Allocate CUDA-accessible memory on CPU host.

Function Documentation

◆ AllocateCudaHostBuffer()

Status arrow::gpu::AllocateCudaHostBuffer(const int64_t size, std::shared_ptr<CudaHostBuffer>* out)

Allocate CUDA-accessible memory on CPU host.

Parameters
    [in]   size   number of bytes
    [out]  out    the allocated buffer
Returns
    Status
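A minimal usage sketch (the `arrow/gpu/cuda_api.h` header path and the `ARROW_RETURN_NOT_OK` macro are assumptions; this requires a CUDA-capable build of Arrow and is not runnable without a GPU):

```cpp
#include <cstring>
#include <memory>
#include <arrow/status.h>
#include <arrow/gpu/cuda_api.h>  // assumed header path for arrow::gpu

// Sketch: allocate 1 MiB of pinned (page-locked) host memory that the
// GPU can access directly; per CudaHostBuffer's description, it is
// backed by cudaHostAlloc.
arrow::Status AllocatePinned(
    std::shared_ptr<arrow::gpu::CudaHostBuffer>* out) {
  ARROW_RETURN_NOT_OK(arrow::gpu::AllocateCudaHostBuffer(1 << 20, out));
  // Ordinary CPU code can touch the buffer like any other memory.
  std::memset((*out)->mutable_data(), 0, (*out)->size());
  return arrow::Status::OK();
}
```

Pinned memory of this kind is typically used as a staging area for fast host-to-device transfers.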

◆ ReadMessage()

Status arrow::gpu::ReadMessage(CudaBufferReader* reader, MemoryPool* pool, std::unique_ptr<ipc::Message>* message)

Read Arrow IPC message located on GPU device.

Parameters
    [in]   reader   a CudaBufferReader
    [in]   pool     a MemoryPool to allocate CPU memory for the metadata
    [out]  message  the deserialized message, body still on device

This function reads the message metadata into host memory but leaves the message body on the device.
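A sketch of this pattern (the header path is an assumption; requires a CUDA-capable build of Arrow):

```cpp
#include <memory>
#include <arrow/memory_pool.h>
#include <arrow/status.h>
#include <arrow/gpu/cuda_api.h>  // assumed header path for arrow::gpu

// Sketch: read an IPC message whose body lives in GPU memory.
// `device_buffer` is assumed to hold a serialized IPC message.
arrow::Status ReadDeviceMessage(
    const std::shared_ptr<arrow::gpu::CudaBuffer>& device_buffer,
    std::unique_ptr<arrow::ipc::Message>* message) {
  arrow::gpu::CudaBufferReader reader(device_buffer);
  // The metadata is copied into host memory allocated from the given
  // pool; the message body stays on the device.
  return arrow::gpu::ReadMessage(&reader, arrow::default_memory_pool(),
                                 message);
}
```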

◆ ReadRecordBatch()

Status arrow::gpu::ReadRecordBatch(const std::shared_ptr<Schema>& schema, const std::shared_ptr<CudaBuffer>& buffer, MemoryPool* pool, std::shared_ptr<RecordBatch>* out)

ReadRecordBatch specialized to handle metadata on CUDA device.

Parameters
    [in]   schema  the Schema for the record batch
    [in]   buffer  a CudaBuffer containing the complete IPC message
    [in]   pool    a MemoryPool to use for allocating space for the metadata
    [out]  out     the reconstructed RecordBatch, with device pointers
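A sketch of the read side (header path assumed; requires a CUDA-capable build of Arrow):

```cpp
#include <memory>
#include <arrow/memory_pool.h>
#include <arrow/record_batch.h>
#include <arrow/status.h>
#include <arrow/gpu/cuda_api.h>  // assumed header path for arrow::gpu

// Sketch: rebuild a RecordBatch from a device-resident IPC message.
// Only the metadata is brought to the host; the column buffers in the
// resulting batch are device pointers, so the batch's values must not
// be dereferenced from CPU code.
arrow::Status BatchFromDevice(
    const std::shared_ptr<arrow::Schema>& schema,
    const std::shared_ptr<arrow::gpu::CudaBuffer>& device_buffer,
    std::shared_ptr<arrow::RecordBatch>* out) {
  return arrow::gpu::ReadRecordBatch(schema, device_buffer,
                                     arrow::default_memory_pool(), out);
}
```

The device-resident batch can then be handed to GPU kernels that understand the Arrow columnar layout.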

◆ SerializeRecordBatch()

Status arrow::gpu::SerializeRecordBatch(const RecordBatch& batch, CudaContext* ctx, std::shared_ptr<CudaBuffer>* out)

Write record batch message to GPU device memory.

Parameters
    [in]   batch  record batch to write
    [in]   ctx    CudaContext to allocate device memory from
    [out]  out    the returned device buffer, which contains the record batch message
Returns
    Status
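A sketch of the write side. Obtaining a CudaContext via CudaDeviceManager is an assumption based on the classes listed above (the exact accessor names are not documented on this page), and the header path is likewise assumed:

```cpp
#include <memory>
#include <arrow/record_batch.h>
#include <arrow/status.h>
#include <arrow/gpu/cuda_api.h>  // assumed header path for arrow::gpu

// Sketch: copy a record batch into device memory as an IPC message.
// CudaDeviceManager::GetInstance / GetContext are assumed accessors.
arrow::Status BatchToDevice(
    const arrow::RecordBatch& batch,
    std::shared_ptr<arrow::gpu::CudaBuffer>* device_buffer) {
  arrow::gpu::CudaDeviceManager* manager = nullptr;
  ARROW_RETURN_NOT_OK(arrow::gpu::CudaDeviceManager::GetInstance(&manager));
  std::shared_ptr<arrow::gpu::CudaContext> context;
  ARROW_RETURN_NOT_OK(manager->GetContext(/*gpu_number=*/0, &context));
  // Allocates a device buffer from the context and writes the
  // IPC-encapsulated message (metadata + body) into it.
  return arrow::gpu::SerializeRecordBatch(batch, context.get(),
                                          device_buffer);
}
```

Together with ReadRecordBatch above, this gives a round trip: serialize on the host side into device memory, then reconstruct a device-pointer RecordBatch without copying the body back.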