struct PackedDecoder {
data: Bytes,
data_offset: usize,
rle_left: usize,
rle_value: bool,
packed_count: usize,
packed_offset: usize,
}
An optimized decoder for decoding RLE and BIT_PACKED data with a bit width of 1 directly into a bitmask.

This is significantly faster than decoding the data into [i16] and then computing a bitmask from it: not only does this skip the intermediate buffer allocation and construction, it can also exploit properties of the encoded data to reduce work further. In particular:
- Packed runs are already bitmask encoded and can simply be appended
- Runs of 1 or 0 bits can be efficiently appended with byte (or larger) operations, as sketched below
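To illustrate the second point, here is a minimal sketch of appending a run of identical bits to a bitmask: only the unaligned edges need bit-by-bit work, and everything in between is whole 0x00 or 0xFF bytes. This is a simplified stand-in, not the crate's code; the append_run helper, the Vec<u8> mask, and the LSB-first bit order are assumptions for illustration.

// Appends `count` copies of `value` to an LSB-first bitmask
// (a simplified sketch, not the crate's implementation).
fn append_run(bitmask: &mut Vec<u8>, bit_len: &mut usize, value: bool, mut count: usize) {
    // Fill the trailing partial byte bit-by-bit until byte aligned.
    while *bit_len % 8 != 0 && count > 0 {
        if value {
            bitmask[*bit_len / 8] |= 1 << (*bit_len % 8);
        }
        *bit_len += 1;
        count -= 1;
    }
    // Append the aligned middle as whole bytes, 8 bits at a time.
    let byte = if value { 0xFF } else { 0x00 };
    bitmask.extend(std::iter::repeat(byte).take(count / 8));
    *bit_len += (count / 8) * 8;
    count %= 8;
    // Start a new partial byte for any leftover bits.
    if count > 0 {
        bitmask.push(if value { (1u8 << count) - 1 } else { 0 });
        *bit_len += count;
    }
}

fn main() {
    let (mut mask, mut len) = (Vec::new(), 0);
    append_run(&mut mask, &mut len, true, 10); // ten valid levels
    append_run(&mut mask, &mut len, false, 6); // six nulls
    assert_eq!(mask, vec![0xFF, 0x03]);
    assert_eq!(len, 16);
}

A run of a million bits costs roughly 125,000 byte writes (or fewer with word-sized operations) instead of a million per-bit updates.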
Implementations
impl PackedDecoder
fn next_rle_block(&mut self) -> Result<()>
fn decode_header(&mut self) -> Result<i64>
Decodes a VLQ-encoded little-endian integer and returns it.
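This is the standard VLQ/ULEB128 varint layout: seven payload bits per byte, least significant group first, with the high bit as a continuation flag. A minimal sketch of that decoding follows; it reads from a plain slice rather than the decoder's internal Bytes buffer, and decode_vlq is a hypothetical name.

// Decodes one VLQ/ULEB128 integer from `data` starting at `offset`,
// advancing `offset` past the consumed bytes (sketch, not the crate's code).
fn decode_vlq(data: &[u8], offset: &mut usize) -> Option<i64> {
    let mut result: i64 = 0;
    let mut shift = 0;
    loop {
        let byte = *data.get(*offset)?;
        *offset += 1;
        // The low 7 bits carry the payload, least significant group first.
        result |= ((byte & 0x7F) as i64) << shift;
        // A clear high bit marks the final byte.
        if byte & 0x80 == 0 {
            return Some(result);
        }
        shift += 7;
        if shift >= 64 {
            return None; // malformed: too many continuation bytes
        }
    }
}

fn main() {
    let mut offset = 0;
    // 300 = 0b1_0010_1100, encoded as [0xAC, 0x02]
    assert_eq!(decode_vlq(&[0xAC, 0x02], &mut offset), Some(300));
    assert_eq!(offset, 2);
}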
impl PackedDecoder
fn new() -> Self
fn set_data(&mut self, encoding: Encoding, data: Bytes)
fn try_consume_all_valid(&mut self, len: usize) -> Result<Option<usize>>
Try to consume len levels if all are valid (the max definition level).

Returns Ok(Some(count)) if count all-valid levels were successfully consumed. Returns Ok(None) if nulls or packed data prevent the fast path.

Note: on the None path, the decoder state may have advanced to the next RLE block, but only if rle_left was zero (i.e., the block would have been loaded on the next read anyway).
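To make the contract concrete, here is a self-contained toy model of that fast path. It is not the crate's implementation: it omits packed runs and error handling (hence Option rather than Result), and the runs queue stands in for the encoded buffer. It does mirror the note above, in that the only state change on the None path is loading the next block when rle_left was already zero.

use std::collections::VecDeque;

struct ToyDecoder {
    rle_left: usize,
    rle_value: bool,
    runs: VecDeque<(bool, usize)>, // stand-in for the encoded RLE blocks
}

impl ToyDecoder {
    fn try_consume_all_valid(&mut self, len: usize) -> Option<usize> {
        if self.rle_left == 0 {
            // Load the next block; per the note, this advances decoder state
            // even on the None path, but only across a block boundary that
            // the next read would have crossed anyway.
            let (value, count) = self.runs.pop_front()?;
            self.rle_value = value;
            self.rle_left = count;
        }
        if !self.rle_value {
            return None; // nulls ahead: the caller falls back to the general path
        }
        // Consume up to `len` levels from the current all-valid run.
        let take = self.rle_left.min(len);
        self.rle_left -= take;
        Some(take)
    }
}

fn main() {
    let mut d = ToyDecoder {
        rle_left: 0,
        rle_value: false,
        runs: VecDeque::from([(true, 100), (false, 4)]),
    };
    assert_eq!(d.try_consume_all_valid(64), Some(64)); // fast path: 64 valid
    assert_eq!(d.try_consume_all_valid(64), Some(36)); // run ends early
    assert_eq!(d.try_consume_all_valid(64), None);     // nulls: general path
}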