
Fuse RLE decoding and view gathering for StringView dictionary#9586

Draft
Dandandan wants to merge 11 commits into apache:main from Dandandan:pr/fuse-rle-view-gathering

Conversation

@Dandandan
Contributor

Which issue does this PR close?

Closes #9582

Rationale

StringView dictionary decoding currently goes through an intermediate index buffer: decode indices → gather views. For RLE runs (which are common), this roundtrip is unnecessary.

What changes are included in this PR?

  • For RLE runs, use repeat_n to fill views directly, skipping the index buffer entirely
  • Pre-reserve output views capacity before the decode loop, eliminating per-chunk reallocation
  • Skip buffer management when all dictionary views are inlined (≤12 bytes)
  • Pre-reserve offsets in ByteArray dictionary decoding
  • Use branchless index clamping in view gather paths
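A minimal sketch of the fused shape described above (names like `DecodedBatch` and `handle` are illustrative, standing in for the crate's `RleDecodedBatch`/`get_batch_direct`; a non-empty dictionary is assumed): the decoder exposes each run to a callback, so RLE runs are expanded with `repeat_n` from a single dictionary lookup while bit-packed runs are gathered immediately, with no intermediate index buffer in either case.

```rust
// Illustrative sketch, not the parquet crate's actual internals.
enum DecodedBatch<'a> {
    Rle { index: u32, len: usize },
    BitPacked(&'a [u32]),
}

fn handle(batch: DecodedBatch<'_>, dict: &[u128], out: &mut Vec<u128>) {
    // Branchless clamp: corrupt indices stay in bounds (assumes dict is non-empty).
    let max_idx = dict.len() - 1;
    match batch {
        // One dictionary lookup per run; the view is repeated directly.
        DecodedBatch::Rle { index, len } => {
            out.extend(std::iter::repeat_n(dict[(index as usize).min(max_idx)], len))
        }
        // Gather straight from the decoded indices, no separate index buffer.
        DecodedBatch::BitPacked(idxs) => {
            out.extend(idxs.iter().map(|&i| dict[(i as usize).min(max_idx)]))
        }
    }
}

fn main() {
    let dict = vec![7u128, 8, 9];
    let mut out = Vec::new();
    handle(DecodedBatch::Rle { index: 1, len: 3 }, &dict, &mut out);
    handle(DecodedBatch::BitPacked(&[0, 2]), &dict, &mut out);
    assert_eq!(out, vec![8, 8, 8, 7, 9]);
}
```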

Based on #9583 (branchless clamping PR).

Are there any user-facing changes?

No.

🤖 Generated with Claude Code

Dandandan and others added 9 commits March 19, 2026 20:56
When bit_width guarantees all possible indices fit within the dictionary,
use unchecked indexing to allow LLVM to unroll the dict gather loop 4x
with paired loads/stores instead of scalar with per-element bounds checks.
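The guarantee above can be sketched as a simple predicate (illustrative, not the crate's actual code): with `bit_width` bits per index, decoded values are at most `2^bit_width - 1`, so if `2^bit_width <= dict_len` every possible index is in bounds and the gather loop needs no per-element check.

```rust
// Illustrative check: true when no decodable index can exceed the dictionary.
fn all_indices_valid(bit_width: u8, dict_len: usize) -> bool {
    (bit_width as u32) < usize::BITS && (1usize << bit_width) <= dict_len
}

fn main() {
    assert!(all_indices_valid(2, 4)); // indices 0..=3 all fit in a 4-entry dict
    assert!(!all_indices_valid(3, 5)); // index 7 would be out of bounds
}
```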

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add RleDecodedBatch enum and get_batch_direct method that exposes RLE vs
bit-packed batches via callback, allowing callers to handle each case
optimally without going through the index buffer.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace if/else checked/unchecked branching with a single branchless
.min(max_idx) clamp. This prevents UB on corrupt parquet files while
avoiding per-element bounds checks.
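A sketch of the branchless clamp (illustrative function names; assumes a non-empty dictionary): `.min(max_idx)` typically compiles to a conditional move rather than a branch, so a corrupt index on malformed parquet data clamps into range instead of causing UB, while the hot loop stays branch-free.

```rust
// Illustrative gather with branchless clamping.
fn gather(dict: &[u128], indices: &[u32], out: &mut Vec<u128>) {
    let max_idx = dict.len() - 1; // assumes dict is non-empty
    out.extend(indices.iter().map(|&i| dict[(i as usize).min(max_idx)]));
}

fn main() {
    let dict = vec![10u128, 20, 30];
    let mut out = Vec::new();
    // 99 is a corrupt index; it clamps to the last entry instead of reading OOB.
    gather(&dict, &[0, 2, 99], &mut out);
    assert_eq!(out, vec![10, 30, 30]);
}
```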

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
These are only used by the arrow dictionary_index decoder. Without
the arrow feature, they appear as dead code to clippy.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When bit_width guarantees all possible indices fit within the dictionary,
use unchecked access to eliminate per-element bounds checks. Also skip
buffer management when all dictionary views are inlined (<=12 bytes).

Generates a clean 8-instruction gather loop for the common case
(all_indices_valid + base_buffer_idx=0) and a branchless 14-instruction
loop for the non-zero buffer offset case.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reserve the full output capacity upfront before the decode loop,
eliminating per-chunk reallocation checks inside extend. This gives
a ~25% speedup for dictionary-encoded StringView reads.
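The reservation pattern can be sketched as follows (illustrative, assuming the total output length is known before decoding): one `with_capacity` call up front means the per-chunk `extend` calls never hit a capacity check that triggers reallocation.

```rust
// Illustrative sketch: reserve once, then extend per chunk without reallocating.
fn decode_all(num_values: usize, chunks: Vec<Vec<u128>>) -> Vec<u128> {
    let mut views: Vec<u128> = Vec::with_capacity(num_values);
    for chunk in &chunks {
        views.extend_from_slice(chunk); // no reallocation inside the loop
    }
    views
}

fn main() {
    let out = decode_all(4, vec![vec![1, 2], vec![3, 4]]);
    assert!(out.capacity() >= 4);
    assert_eq!(out, vec![1, 2, 3, 4]);
}
```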

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
For RLE runs, look up the dict view once and repeat directly with
repeat_n, skipping the index buffer entirely. For bit-packed runs,
decode indices to a stack-local buffer and gather immediately.

Skip buffer management when all dictionary views are inlined (<=12 bytes).
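The inlined-view check can be sketched like this (assuming the standard Arrow Utf8View layout, where the low 32 bits of each 16-byte view hold the string length and strings of 12 bytes or fewer are stored inline): when every dictionary view is inlined, copied views carry no buffer index, so no buffer bookkeeping is needed.

```rust
// Illustrative check against the Arrow view layout: a view is inline
// when its length (low 32 bits of the u128) is at most 12 bytes.
fn all_views_inlined(dict_views: &[u128]) -> bool {
    dict_views.iter().all(|v| (*v as u32) <= 12)
}

fn main() {
    assert!(all_views_inlined(&[3u128, 12u128])); // lengths 3 and 12: inline
    assert!(!all_views_inlined(&[13u128])); // length 13 needs an out-of-line buffer
}
```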

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Apply .min(max_idx) clamping in gather_views to prevent UB on corrupt
data while keeping the hot loop branchless.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Reserve offsets capacity upfront before the decode loop to avoid
per-chunk reallocation. ~3.5% improvement for StringArray dict reads.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@github-actions github-actions bot added the parquet Changes to the parquet crate label Mar 19, 2026
@Dandandan Dandandan marked this pull request as draft March 19, 2026 20:18
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@Dandandan
Contributor Author

run benchmark arrow_reader_clickbench

@adriangbot

🤖 Arrow criterion benchmark running (GKE) | trigger
Linux bench-c4093151654-471-d98l6 6.12.55+ #1 SMP Sun Feb 1 08:59:41 UTC 2026 aarch64 GNU/Linux
Comparing pr/fuse-rle-view-gathering (6e80006) to 88422cb (merge-base) diff
BENCH_NAME=arrow_reader_clickbench
BENCH_COMMAND=cargo bench --features=arrow,async,test_common,experimental,object_store --bench arrow_reader_clickbench
BENCH_FILTER=
Results will be posted here when complete

@adriangbot

🤖 Arrow criterion benchmark completed (GKE) | trigger

Details

group                                             main                                   pr_fuse-rle-view-gathering
-----                                             ----                                   --------------------------
arrow_reader_clickbench/async/Q1                  1.00   1095.5±6.91µs        ? ?/sec    1.00   1092.0±7.29µs        ? ?/sec
arrow_reader_clickbench/async/Q10                 1.09      6.9±0.26ms        ? ?/sec    1.00      6.3±0.27ms        ? ?/sec
arrow_reader_clickbench/async/Q11                 1.05      8.0±0.27ms        ? ?/sec    1.00      7.6±0.19ms        ? ?/sec
arrow_reader_clickbench/async/Q12                 1.04     15.1±0.09ms        ? ?/sec    1.00     14.6±0.21ms        ? ?/sec
arrow_reader_clickbench/async/Q13                 1.04     17.8±0.17ms        ? ?/sec    1.00     17.1±0.40ms        ? ?/sec
arrow_reader_clickbench/async/Q14                 1.04     16.4±0.37ms        ? ?/sec    1.00     15.8±0.39ms        ? ?/sec
arrow_reader_clickbench/async/Q19                 1.02      3.1±0.07ms        ? ?/sec    1.00      3.1±0.06ms        ? ?/sec
arrow_reader_clickbench/async/Q20                 1.01     72.4±0.59ms        ? ?/sec    1.00     71.8±0.47ms        ? ?/sec
arrow_reader_clickbench/async/Q21                 1.32    106.9±1.76ms        ? ?/sec    1.00     80.8±0.37ms        ? ?/sec
arrow_reader_clickbench/async/Q22                 1.19    133.6±6.90ms        ? ?/sec    1.00    112.2±1.56ms        ? ?/sec
arrow_reader_clickbench/async/Q23                 1.03    247.3±2.57ms        ? ?/sec    1.00    240.7±2.61ms        ? ?/sec
arrow_reader_clickbench/async/Q24                 1.02     20.2±0.35ms        ? ?/sec    1.00     19.7±0.39ms        ? ?/sec
arrow_reader_clickbench/async/Q27                 1.00     58.0±0.27ms        ? ?/sec    1.00     58.1±0.19ms        ? ?/sec
arrow_reader_clickbench/async/Q28                 1.02     59.6±0.73ms        ? ?/sec    1.00     58.2±0.52ms        ? ?/sec
arrow_reader_clickbench/async/Q30                 1.03     18.8±0.22ms        ? ?/sec    1.00     18.2±0.18ms        ? ?/sec
arrow_reader_clickbench/async/Q36                 1.01     15.5±0.36ms        ? ?/sec    1.00     15.3±0.43ms        ? ?/sec
arrow_reader_clickbench/async/Q37                 1.01      5.5±0.10ms        ? ?/sec    1.00      5.4±0.13ms        ? ?/sec
arrow_reader_clickbench/async/Q38                 1.00     13.6±0.20ms        ? ?/sec    1.01     13.8±0.27ms        ? ?/sec
arrow_reader_clickbench/async/Q39                 1.00     24.9±0.24ms        ? ?/sec    1.00     25.0±0.25ms        ? ?/sec
arrow_reader_clickbench/async/Q40                 1.00      5.8±0.10ms        ? ?/sec    1.00      5.8±0.13ms        ? ?/sec
arrow_reader_clickbench/async/Q41                 1.00      5.0±0.09ms        ? ?/sec    1.00      5.0±0.09ms        ? ?/sec
arrow_reader_clickbench/async/Q42                 1.00      3.5±0.04ms        ? ?/sec    1.01      3.6±0.06ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q1     1.00   1063.3±6.52µs        ? ?/sec    1.00   1065.8±2.37µs        ? ?/sec
arrow_reader_clickbench/async_object_store/Q10    1.09      6.8±0.22ms        ? ?/sec    1.00      6.2±0.19ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q11    1.09      8.0±0.04ms        ? ?/sec    1.00      7.3±0.05ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q12    1.02     14.8±0.32ms        ? ?/sec    1.00     14.5±0.29ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q13    1.03     17.3±0.43ms        ? ?/sec    1.00     16.8±0.48ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q14    1.04     16.3±0.39ms        ? ?/sec    1.00     15.8±0.40ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q19    1.00      3.0±0.07ms        ? ?/sec    1.00      3.0±0.05ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q20    1.00     71.8±0.59ms        ? ?/sec    1.00     71.7±0.50ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q21    1.00     81.0±0.32ms        ? ?/sec    1.00     80.6±0.59ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q22    1.02     98.1±0.86ms        ? ?/sec    1.00     96.6±1.01ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q23    1.02    232.4±2.35ms        ? ?/sec    1.00    227.0±2.07ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q24    1.03     20.0±0.11ms        ? ?/sec    1.00     19.5±0.07ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q27    1.01     57.1±0.33ms        ? ?/sec    1.00     56.7±0.64ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q28    1.03     58.1±0.83ms        ? ?/sec    1.00     56.6±0.74ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q30    1.04     18.5±0.25ms        ? ?/sec    1.00     17.8±0.19ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q36    1.00     14.6±0.45ms        ? ?/sec    1.01     14.7±0.54ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q37    1.02      5.5±0.09ms        ? ?/sec    1.00      5.3±0.07ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q38    1.00     13.1±0.12ms        ? ?/sec    1.03     13.5±0.12ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q39    1.01     24.0±0.23ms        ? ?/sec    1.00     23.7±0.46ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q40    1.02      5.6±0.13ms        ? ?/sec    1.00      5.5±0.15ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q41    1.01      4.9±0.09ms        ? ?/sec    1.00      4.8±0.09ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q42    1.00      3.5±0.05ms        ? ?/sec    1.00      3.5±0.05ms        ? ?/sec
arrow_reader_clickbench/sync/Q1                   1.00    866.6±2.34µs        ? ?/sec    1.02    880.2±5.76µs        ? ?/sec
arrow_reader_clickbench/sync/Q10                  1.09      5.3±0.04ms        ? ?/sec    1.00      4.8±0.01ms        ? ?/sec
arrow_reader_clickbench/sync/Q11                  1.10      6.2±0.04ms        ? ?/sec    1.00      5.7±0.08ms        ? ?/sec
arrow_reader_clickbench/sync/Q12                  1.04     22.3±0.41ms        ? ?/sec    1.00     21.4±0.34ms        ? ?/sec
arrow_reader_clickbench/sync/Q13                  1.00     24.9±0.41ms        ? ?/sec    1.14     28.4±1.08ms        ? ?/sec
arrow_reader_clickbench/sync/Q14                  1.04     23.6±0.37ms        ? ?/sec    1.00     22.7±0.30ms        ? ?/sec
arrow_reader_clickbench/sync/Q19                  1.00      2.8±0.06ms        ? ?/sec    1.00      2.7±0.02ms        ? ?/sec
arrow_reader_clickbench/sync/Q20                  1.01    124.2±0.53ms        ? ?/sec    1.00    122.8±0.99ms        ? ?/sec
arrow_reader_clickbench/sync/Q21                  1.07     98.7±0.79ms        ? ?/sec    1.00     92.4±0.60ms        ? ?/sec
arrow_reader_clickbench/sync/Q22                  1.07    146.3±2.59ms        ? ?/sec    1.00    137.0±1.16ms        ? ?/sec
arrow_reader_clickbench/sync/Q23                  1.02    278.1±8.83ms        ? ?/sec    1.00    273.4±7.63ms        ? ?/sec
arrow_reader_clickbench/sync/Q24                  1.05     28.3±0.50ms        ? ?/sec    1.00     26.8±0.52ms        ? ?/sec
arrow_reader_clickbench/sync/Q27                  1.01    108.5±0.81ms        ? ?/sec    1.00    107.5±0.78ms        ? ?/sec
arrow_reader_clickbench/sync/Q28                  1.04    109.5±0.95ms        ? ?/sec    1.00    105.4±0.42ms        ? ?/sec
arrow_reader_clickbench/sync/Q30                  1.05     19.5±0.03ms        ? ?/sec    1.00     18.6±0.15ms        ? ?/sec
arrow_reader_clickbench/sync/Q36                  1.00     22.9±0.29ms        ? ?/sec    1.00     22.9±0.38ms        ? ?/sec
arrow_reader_clickbench/sync/Q37                  1.00      7.0±0.08ms        ? ?/sec    1.12      7.8±0.09ms        ? ?/sec
arrow_reader_clickbench/sync/Q38                  1.01     11.6±0.18ms        ? ?/sec    1.00     11.5±0.21ms        ? ?/sec
arrow_reader_clickbench/sync/Q39                  1.00     21.4±0.35ms        ? ?/sec    1.01     21.5±0.21ms        ? ?/sec
arrow_reader_clickbench/sync/Q40                  1.02      5.4±0.02ms        ? ?/sec    1.00      5.3±0.05ms        ? ?/sec
arrow_reader_clickbench/sync/Q41                  1.02      5.8±0.06ms        ? ?/sec    1.00      5.7±0.09ms        ? ?/sec
arrow_reader_clickbench/sync/Q42                  1.00      4.4±0.06ms        ? ?/sec    1.00      4.4±0.07ms        ? ?/sec

Resource Usage

base (merge-base)

Metric Value
Wall time 785.6s
Peak memory 3.1 GiB
Avg memory 3.0 GiB
CPU user 707.4s
CPU sys 78.0s
Disk read 0 B
Disk write 1.9 GiB

branch

Metric Value
Wall time 781.5s
Peak memory 3.2 GiB
Avg memory 3.1 GiB
CPU user 714.2s
CPU sys 67.2s
Disk read 0 B
Disk write 172.5 MiB

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Labels

parquet Changes to the parquet crate
