Apache Arrow (C++)
A columnar in-memory analytics layer designed to accelerate big data.
arrow::gpu::CudaBuffer Class Reference

An Arrow buffer located on a GPU device. More...

#include <arrow/gpu/cuda_memory.h>

Inheritance diagram for arrow::gpu::CudaBuffer:
arrow::Buffer

Public Member Functions

 CudaBuffer (uint8_t *data, int64_t size, const std::shared_ptr< CudaContext > &context, bool own_data=false, bool is_ipc=false)
 
 CudaBuffer (const std::shared_ptr< CudaBuffer > &parent, const int64_t offset, const int64_t size)
 
 ~CudaBuffer ()
 
Status CopyToHost (const int64_t position, const int64_t nbytes, uint8_t *out) const
 Copy memory from GPU device to CPU host. More...
 
Status CopyFromHost (const int64_t position, const uint8_t *data, int64_t nbytes)
 Copy memory to device at position. More...
 
virtual Status ExportForIpc (std::unique_ptr< CudaIpcMemHandle > *handle)
 Expose this device buffer as IPC memory which can be used in other processes. More...
 
std::shared_ptr< CudaContext > context () const
 
- Public Member Functions inherited from arrow::Buffer
 Buffer (const uint8_t *data, int64_t size)
 Construct from buffer and size without copying memory. More...
 
 Buffer (const std::string &data)
 Construct from std::string without copying memory. More...
 
virtual ~Buffer ()=default
 
 Buffer (const std::shared_ptr< Buffer > &parent, const int64_t offset, const int64_t size)
 A view at an offset into data owned by another buffer; holding a shared_ptr to the parent keeps the pointer valid even after other shared_ptr's to the parent buffer have been destroyed. More...
 
bool is_mutable () const
 
bool Equals (const Buffer &other, int64_t nbytes) const
 Return true if both buffers are the same size and contain the same bytes up to the number of compared bytes. More...
 
bool Equals (const Buffer &other) const
 Return true if both buffers are the same size and contain the same bytes. More...
 
Status Copy (const int64_t start, const int64_t nbytes, MemoryPool *pool, std::shared_ptr< Buffer > *out) const
 Copy a section of the buffer into a new Buffer. More...
 
Status Copy (const int64_t start, const int64_t nbytes, std::shared_ptr< Buffer > *out) const
 Copy a section of the buffer using the default memory pool into a new Buffer. More...
 
int64_t capacity () const
 
const uint8_t * data () const
 
uint8_t * mutable_data ()
 
int64_t size () const
 
std::shared_ptr< Buffer > parent () const
 

Protected Member Functions

virtual Status Close ()
 

Protected Attributes

std::shared_ptr< CudaContext > context_
 
bool own_data_
 
bool is_ipc_
 
- Protected Attributes inherited from arrow::Buffer
bool is_mutable_
 
const uint8_t * data_
 
uint8_t * mutable_data_
 
int64_t size_
 
int64_t capacity_
 
std::shared_ptr< Buffer > parent_
 

Detailed Description

An Arrow buffer located on a GPU device.

Be careful when using this in Arrow code which may not be GPU-aware: the data pointer refers to device memory and must not be dereferenced on the host.
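A minimal allocation sketch. It assumes the companion classes from the same GPU module (CudaDeviceManager, CudaContext) and an umbrella header name; check the header you actually build against, as these names are assumptions beyond this page:

```cpp
#include <cstdint>
#include <memory>
#include "arrow/gpu/cuda_memory.h"  // CudaBuffer; manager/context headers assumed nearby

// Allocate nbytes of device memory on GPU 0, returned as a CudaBuffer.
// CudaDeviceManager::GetInstance, GetContext, and Allocate are assumed
// from the companion CUDA context API, not documented on this page.
arrow::Status AllocateOnGpu(int64_t nbytes,
                            std::shared_ptr<arrow::gpu::CudaBuffer>* out) {
  arrow::gpu::CudaDeviceManager* manager = nullptr;
  arrow::Status st = arrow::gpu::CudaDeviceManager::GetInstance(&manager);
  if (!st.ok()) return st;

  std::shared_ptr<arrow::gpu::CudaContext> context;
  st = manager->GetContext(/*device_number=*/0, &context);
  if (!st.ok()) return st;

  // The returned CudaBuffer owns the device allocation and frees it
  // when the last shared_ptr is destroyed (unless exported for IPC).
  return context->Allocate(nbytes, out);
}
```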

Constructor & Destructor Documentation

◆ CudaBuffer() [1/2]

arrow::gpu::CudaBuffer::CudaBuffer ( uint8_t *  data,
int64_t  size,
const std::shared_ptr< CudaContext > &  context,
bool  own_data = false,
bool  is_ipc = false 
)

◆ CudaBuffer() [2/2]

arrow::gpu::CudaBuffer::CudaBuffer ( const std::shared_ptr< CudaBuffer > &  parent,
const int64_t  offset,
const int64_t  size 
)

◆ ~CudaBuffer()

arrow::gpu::CudaBuffer::~CudaBuffer ( )

Member Function Documentation

◆ Close()

virtual Status arrow::gpu::CudaBuffer::Close ( )
protected virtual

◆ context()

std::shared_ptr<CudaContext> arrow::gpu::CudaBuffer::context ( ) const
inline

◆ CopyFromHost()

Status arrow::gpu::CudaBuffer::CopyFromHost ( const int64_t  position,
const uint8_t *  data,
int64_t  nbytes 
)

Copy memory to device at position.

Parameters
[in]  position  start position to copy bytes to
[in]  data  the host data to copy
[in]  nbytes  number of bytes to copy
Returns
Status
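A short host-to-device sketch, assuming `buffer` is an already-allocated `std::shared_ptr<arrow::gpu::CudaBuffer>` of at least 4 bytes:

```cpp
#include <cstdint>

// buffer: std::shared_ptr<arrow::gpu::CudaBuffer>, size >= 4 (assumed in scope).
const uint8_t host_data[4] = {1, 2, 3, 4};

// Copy the 4 host bytes into the device buffer starting at position 0.
arrow::Status st = buffer->CopyFromHost(/*position=*/0, host_data,
                                        /*nbytes=*/sizeof(host_data));
if (!st.ok()) {
  // Handle the error, e.g. propagate st to the caller.
}
```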

◆ CopyToHost()

Status arrow::gpu::CudaBuffer::CopyToHost ( const int64_t  position,
const int64_t  nbytes,
uint8_t *  out 
) const

Copy memory from GPU device to CPU host.

Parameters
[in]  position  start position to copy bytes from
[in]  nbytes  number of bytes to copy
[out]  out  a pre-allocated output buffer
Returns
Status
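The reverse direction reads device bytes back into a caller-owned host array; this sketch assumes `buffer` is a populated `std::shared_ptr<arrow::gpu::CudaBuffer>` of at least 4 bytes:

```cpp
#include <cstdint>

// Pre-allocate host storage; CopyToHost does not allocate for you.
uint8_t host_out[4] = {0};

// Read 4 bytes starting at device offset 0 into host_out.
arrow::Status st = buffer->CopyToHost(/*position=*/0,
                                      /*nbytes=*/sizeof(host_out), host_out);
if (st.ok()) {
  // host_out now mirrors the first 4 bytes of the device buffer.
}
```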

◆ ExportForIpc()

virtual Status arrow::gpu::CudaBuffer::ExportForIpc ( std::unique_ptr< CudaIpcMemHandle > *  handle)
virtual

Expose this device buffer as IPC memory which can be used in other processes.

Parameters
[out]handlethe exported IPC handle
Returns
Status
Note
After calling this function, this device memory will not be freed when the CudaBuffer is destructed
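An IPC export sketch. The receiving-side call (`CudaContext::OpenIpcBuffer`) is an assumption drawn from the companion context API, not documented on this page:

```cpp
#include <memory>

// buffer: std::shared_ptr<arrow::gpu::CudaBuffer> (assumed in scope).
std::unique_ptr<arrow::gpu::CudaIpcMemHandle> handle;
arrow::Status st = buffer->ExportForIpc(&handle);
if (!st.ok()) {
  // Handle the error.
}
// Note: after a successful export, this CudaBuffer no longer frees the
// device memory on destruction; the importing process shares ownership.
//
// The handle can be serialized and sent to a peer process, which maps the
// same device memory via the context, e.g. (assumed API):
//   std::shared_ptr<arrow::gpu::CudaBuffer> imported;
//   peer_context->OpenIpcBuffer(*handle, &imported);
```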

Member Data Documentation

◆ context_

std::shared_ptr<CudaContext> arrow::gpu::CudaBuffer::context_
protected

◆ is_ipc_

bool arrow::gpu::CudaBuffer::is_ipc_
protected

◆ own_data_

bool arrow::gpu::CudaBuffer::own_data_
protected

The documentation for this class was generated from the following file:

arrow/gpu/cuda_memory.h