
parquet: optimize CachedArrayReader byte-array coalescing #9743

Open
ClSlaid wants to merge 1 commit into apache:main from ClSlaid:issue-9060-cached-array-reader-byte-coalescer

Conversation

@ClSlaid
Contributor

@ClSlaid ClSlaid commented Apr 16, 2026

When CachedArrayReader builds output from multiple cached batches, the old path materialized filtered byte arrays and then concatenated them. Replace that path for Utf8/Binary arrays with a direct coalescer that builds offsets, values, and validity in one output array, while keeping the existing generic MutableArrayData path for other types.

Add a dedicated CachedArrayReader benchmark and a sparse string regression test so this path is measured directly and covered independently of broader parquet reader benchmarks.

Benchmark vs main:

  • cached_array_reader/utf8_sparse_cross_batch_4m_rows/consume_batch: 11.949 ms -> 4.153 ms (-65.2%)
  • arrow_reader_clickbench/sync/Q24 (same filter/projection as ClickBench Q26): 28.377 ms -> 28.443 ms (+0.2%, no measurable change)
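To illustrate the shape of the optimization, here is a minimal pure-Rust sketch (not the actual arrow-rs code or API; all names are hypothetical). It models a Utf8 array as offsets + values + validity and copies selected row ranges from several cached batches into one pre-sized output in a single pass, rather than filtering each batch and concatenating the intermediates:

```rust
// Hypothetical model of a Utf8 array: offsets (len = rows + 1),
// raw value bytes, and a per-row validity flag.
struct Utf8Parts {
    offsets: Vec<i32>,
    values: Vec<u8>,
    validity: Vec<bool>,
}

impl Utf8Parts {
    fn from_strs(rows: &[Option<&str>]) -> Self {
        let mut offsets = vec![0i32];
        let mut values = Vec::new();
        let mut validity = Vec::new();
        for r in rows {
            if let Some(s) = r {
                values.extend_from_slice(s.as_bytes());
                validity.push(true);
            } else {
                validity.push(false);
            }
            offsets.push(values.len() as i32);
        }
        Self { offsets, values, validity }
    }
}

/// Coalesce selected row ranges from several cached batches into one
/// output array directly, with no intermediate filtered arrays.
fn coalesce(
    batches: &[Utf8Parts],
    selections: &[(usize, std::ops::Range<usize>)],
) -> Utf8Parts {
    // Size the output buffers exactly once up front.
    let total_rows: usize = selections.iter().map(|(_, r)| r.len()).sum();
    let total_bytes: usize = selections
        .iter()
        .map(|(b, r)| (batches[*b].offsets[r.end] - batches[*b].offsets[r.start]) as usize)
        .sum();

    let mut out = Utf8Parts {
        offsets: Vec::with_capacity(total_rows + 1),
        values: Vec::with_capacity(total_bytes),
        validity: Vec::with_capacity(total_rows),
    };
    out.offsets.push(0);

    // Copy value bytes once per range and rebase offsets as we go.
    for (b, r) in selections {
        let src = &batches[*b];
        let start = src.offsets[r.start] as usize;
        let end = src.offsets[r.end] as usize;
        let base = out.values.len() as i32 - src.offsets[r.start];
        out.values.extend_from_slice(&src.values[start..end]);
        for i in r.clone() {
            out.offsets.push(src.offsets[i + 1] + base);
            out.validity.push(src.validity[i]);
        }
    }
    out
}
```

The win over the old materialize-then-concat path is that each surviving byte is copied exactly once and the output buffers are allocated exactly once, which matters most when the selection is sparse and spread across many cached batches, as in the `utf8_sparse_cross_batch_4m_rows` benchmark above.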


Signed-off-by: cl <cailue@apache.org>
@github-actions github-actions Bot added the parquet Changes to the parquet crate label Apr 16, 2026
@ClSlaid
Contributor Author

ClSlaid commented Apr 17, 2026

@alamb I've tried to optimize with GPT 5.4; the improvement was not that obvious in the original test case you gave, so I had it write a new benchmark and optimized against that.

However, I'm still not really confident about the current implementation, so please have a look.

@alamb
Contributor

alamb commented Apr 22, 2026

@XiangpengHao can you help review this PR?

@alamb
Contributor

alamb commented Apr 22, 2026

run benchmarks arrow_reader_clickbench

@adriangbot

🤖 Arrow criterion benchmark running (GKE) | trigger
Instance: c4a-highmem-16 (12 vCPU / 65 GiB) | Linux bench-c4296632418-1745-m2pn7 6.12.55+ #1 SMP Sun Feb 1 08:59:41 UTC 2026 aarch64 GNU/Linux

CPU Details (lscpu)
Architecture:                            aarch64
CPU op-mode(s):                          64-bit
Byte Order:                              Little Endian
CPU(s):                                  16
On-line CPU(s) list:                     0-15
Vendor ID:                               ARM
Model name:                              Neoverse-V2
Model:                                   1
Thread(s) per core:                      1
Core(s) per cluster:                     16
Socket(s):                               -
Cluster(s):                              1
Stepping:                                r0p1
BogoMIPS:                                2000.00
Flags:                                   fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bf16 dgh rng bti
L1d cache:                               1 MiB (16 instances)
L1i cache:                               1 MiB (16 instances)
L2 cache:                                32 MiB (16 instances)
L3 cache:                                80 MiB (1 instance)
NUMA node(s):                            1
NUMA node0 CPU(s):                       0-15
Vulnerability Gather data sampling:      Not affected
Vulnerability Indirect target selection: Not affected
Vulnerability Itlb multihit:             Not affected
Vulnerability L1tf:                      Not affected
Vulnerability Mds:                       Not affected
Vulnerability Meltdown:                  Not affected
Vulnerability Mmio stale data:           Not affected
Vulnerability Reg file data sampling:    Not affected
Vulnerability Retbleed:                  Not affected
Vulnerability Spec rstack overflow:      Not affected
Vulnerability Spec store bypass:         Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:                Mitigation; __user pointer sanitization
Vulnerability Spectre v2:                Mitigation; CSV2, BHB
Vulnerability Srbds:                     Not affected
Vulnerability Tsa:                       Not affected
Vulnerability Tsx async abort:           Not affected
Vulnerability Vmscape:                   Not affected

Comparing issue-9060-cached-array-reader-byte-coalescer (5e78671) to d7d9ad3 (merge-base) diff
BENCH_NAME=arrow_reader_clickbench
BENCH_COMMAND=cargo bench --features=arrow,async,test_common,experimental,object_store --bench arrow_reader_clickbench
BENCH_FILTER=
Results will be posted here when complete


File an issue against this benchmark runner

@adriangbot

🤖 Arrow criterion benchmark completed (GKE) | trigger

Instance: c4a-highmem-16 (12 vCPU / 65 GiB)

Details

group                                             issue-9060-cached-array-reader-byte-coalescer    main
-----                                             ---------------------------------------------    ----
arrow_reader_clickbench/async/Q1                  1.02   1101.5±7.22µs        ? ?/sec              1.00   1079.0±2.72µs        ? ?/sec
arrow_reader_clickbench/async/Q10                 1.00      6.6±0.04ms        ? ?/sec              1.00      6.6±0.03ms        ? ?/sec
arrow_reader_clickbench/async/Q11                 1.00      7.5±0.03ms        ? ?/sec              1.02      7.6±0.05ms        ? ?/sec
arrow_reader_clickbench/async/Q12                 1.00     14.4±0.05ms        ? ?/sec              1.00     14.4±0.04ms        ? ?/sec
arrow_reader_clickbench/async/Q13                 1.00     17.0±0.07ms        ? ?/sec              1.00     17.1±0.07ms        ? ?/sec
arrow_reader_clickbench/async/Q14                 1.00     16.0±0.07ms        ? ?/sec              1.00     15.9±0.05ms        ? ?/sec
arrow_reader_clickbench/async/Q19                 1.00      3.0±0.03ms        ? ?/sec              1.02      3.1±0.02ms        ? ?/sec
arrow_reader_clickbench/async/Q20                 1.00     71.7±0.33ms        ? ?/sec              1.33     95.5±1.13ms        ? ?/sec
arrow_reader_clickbench/async/Q21                 1.00     80.4±0.31ms        ? ?/sec              1.33    107.2±9.26ms        ? ?/sec
arrow_reader_clickbench/async/Q22                 1.00    105.2±7.99ms        ? ?/sec              1.33    140.5±6.19ms        ? ?/sec
arrow_reader_clickbench/async/Q23                 1.00    245.8±1.66ms        ? ?/sec              1.01    248.5±6.11ms        ? ?/sec
arrow_reader_clickbench/async/Q24                 1.00     19.4±0.09ms        ? ?/sec              1.01     19.7±2.29ms        ? ?/sec
arrow_reader_clickbench/async/Q27                 1.00     58.1±0.36ms        ? ?/sec              1.00     58.3±0.65ms        ? ?/sec
arrow_reader_clickbench/async/Q28                 1.01     57.7±0.26ms        ? ?/sec              1.00     57.1±0.38ms        ? ?/sec
arrow_reader_clickbench/async/Q30                 1.00     18.1±0.05ms        ? ?/sec              1.01     18.3±0.05ms        ? ?/sec
arrow_reader_clickbench/async/Q36                 1.00     15.2±0.14ms        ? ?/sec              1.03     15.7±0.42ms        ? ?/sec
arrow_reader_clickbench/async/Q37                 1.01      5.5±0.02ms        ? ?/sec              1.00      5.4±0.04ms        ? ?/sec
arrow_reader_clickbench/async/Q38                 1.00     13.4±0.17ms        ? ?/sec              1.05     14.0±0.35ms        ? ?/sec
arrow_reader_clickbench/async/Q39                 1.00     24.3±0.29ms        ? ?/sec              1.07     26.0±0.60ms        ? ?/sec
arrow_reader_clickbench/async/Q40                 1.00      5.5±0.03ms        ? ?/sec              1.07      5.9±0.09ms        ? ?/sec
arrow_reader_clickbench/async/Q41                 1.00      4.8±0.03ms        ? ?/sec              1.05      5.1±0.06ms        ? ?/sec
arrow_reader_clickbench/async/Q42                 1.00      3.5±0.02ms        ? ?/sec              1.02      3.5±0.02ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q1     1.01   1076.9±6.32µs        ? ?/sec              1.00   1066.1±5.34µs        ? ?/sec
arrow_reader_clickbench/async_object_store/Q10    1.02      6.6±0.04ms        ? ?/sec              1.00      6.5±0.05ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q11    1.01      7.6±0.09ms        ? ?/sec              1.00      7.5±0.08ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q12    1.01     14.4±0.06ms        ? ?/sec              1.00     14.4±0.06ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q13    1.02     17.3±0.08ms        ? ?/sec              1.00     17.0±0.09ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q14    1.01     15.9±0.05ms        ? ?/sec              1.00     15.8±0.05ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q19    1.00      2.9±0.02ms        ? ?/sec              1.01      3.0±0.03ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q20    1.00     71.6±0.44ms        ? ?/sec              1.01     72.1±0.55ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q21    1.00     80.1±0.37ms        ? ?/sec              1.00     80.4±0.55ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q22    1.00     98.5±2.06ms        ? ?/sec              1.00     98.8±0.64ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q23    1.02    221.8±0.43ms        ? ?/sec              1.00    217.0±0.59ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q24    1.00     19.2±0.17ms        ? ?/sec              1.00     19.3±0.10ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q27    1.01     57.8±0.31ms        ? ?/sec              1.00     57.2±0.49ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q28    1.02     57.9±0.28ms        ? ?/sec              1.00     56.8±0.36ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q30    1.00     17.9±0.06ms        ? ?/sec              1.01     18.1±0.05ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q36    1.00     15.0±0.15ms        ? ?/sec              1.03     15.3±0.29ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q37    1.02      5.4±0.02ms        ? ?/sec              1.00      5.3±0.03ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q38    1.00     13.0±0.15ms        ? ?/sec              1.03     13.5±0.26ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q39    1.00     23.5±0.26ms        ? ?/sec              1.02     24.1±0.44ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q40    1.00      5.3±0.03ms        ? ?/sec              1.06      5.6±0.06ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q41    1.00      4.6±0.03ms        ? ?/sec              1.06      4.9±0.04ms        ? ?/sec
arrow_reader_clickbench/async_object_store/Q42    1.00      3.3±0.02ms        ? ?/sec              1.04      3.4±0.02ms        ? ?/sec
arrow_reader_clickbench/sync/Q1                   1.00    857.9±1.48µs        ? ?/sec              1.03   884.3±21.21µs        ? ?/sec
arrow_reader_clickbench/sync/Q10                  1.00      5.0±0.01ms        ? ?/sec              1.02      5.1±0.01ms        ? ?/sec
arrow_reader_clickbench/sync/Q11                  1.00      5.9±0.04ms        ? ?/sec              1.02      6.0±0.02ms        ? ?/sec
arrow_reader_clickbench/sync/Q12                  1.00     21.3±0.06ms        ? ?/sec              1.01     21.5±0.06ms        ? ?/sec
arrow_reader_clickbench/sync/Q13                  1.01     30.3±0.18ms        ? ?/sec              1.00     30.1±0.16ms        ? ?/sec
arrow_reader_clickbench/sync/Q14                  1.01     22.8±0.07ms        ? ?/sec              1.00     22.7±0.04ms        ? ?/sec
arrow_reader_clickbench/sync/Q19                  1.00      2.6±0.02ms        ? ?/sec              1.02      2.6±0.02ms        ? ?/sec
arrow_reader_clickbench/sync/Q20                  1.00    122.8±0.16ms        ? ?/sec              1.00    123.2±0.18ms        ? ?/sec
arrow_reader_clickbench/sync/Q21                  1.00     92.5±0.13ms        ? ?/sec              1.01     93.3±0.09ms        ? ?/sec
arrow_reader_clickbench/sync/Q22                  1.00    138.5±0.17ms        ? ?/sec              1.00    138.8±0.25ms        ? ?/sec
arrow_reader_clickbench/sync/Q23                  1.00   276.6±11.88ms        ? ?/sec              1.03   283.6±16.14ms        ? ?/sec
arrow_reader_clickbench/sync/Q24                  1.00     26.6±0.05ms        ? ?/sec              1.01     26.8±0.04ms        ? ?/sec
arrow_reader_clickbench/sync/Q27                  1.00    107.4±0.12ms        ? ?/sec              1.01    108.7±0.13ms        ? ?/sec
arrow_reader_clickbench/sync/Q28                  1.00    104.7±0.16ms        ? ?/sec              1.00    105.0±0.09ms        ? ?/sec
arrow_reader_clickbench/sync/Q30                  1.00     18.4±0.06ms        ? ?/sec              1.01     18.6±0.07ms        ? ?/sec
arrow_reader_clickbench/sync/Q36                  1.00     22.3±0.07ms        ? ?/sec              1.01     22.5±0.06ms        ? ?/sec
arrow_reader_clickbench/sync/Q37                  1.00      6.7±0.03ms        ? ?/sec              1.02      6.9±0.01ms        ? ?/sec
arrow_reader_clickbench/sync/Q38                  1.00     11.5±0.03ms        ? ?/sec              1.00     11.5±0.02ms        ? ?/sec
arrow_reader_clickbench/sync/Q39                  1.00     21.0±0.05ms        ? ?/sec              1.01     21.3±0.04ms        ? ?/sec
arrow_reader_clickbench/sync/Q40                  1.00      5.0±0.02ms        ? ?/sec              1.08      5.4±0.02ms        ? ?/sec
arrow_reader_clickbench/sync/Q41                  1.00      5.5±0.03ms        ? ?/sec              1.03      5.6±0.03ms        ? ?/sec
arrow_reader_clickbench/sync/Q42                  1.00      4.3±0.02ms        ? ?/sec              1.01      4.3±0.02ms        ? ?/sec

Resource Usage

base (merge-base)

Metric Value
Wall time 785.2s
Peak memory 4.6 GiB
Avg memory 4.5 GiB
CPU user 699.5s
CPU sys 82.5s
Peak spill 0 B

branch

Metric Value
Wall time 780.2s
Peak memory 4.8 GiB
Avg memory 4.7 GiB
CPU user 710.1s
CPU sys 67.7s
Peak spill 0 B

selected_row_count: usize,
) -> ArrayRef {
match selected_batches[0].array.data_type() {
ArrowType::Utf8 => {
Contributor


Wouldn't it need Utf8View support to show up in the ClickBench benchmarks?

@XiangpengHao
Contributor

Thank you @ClSlaid. The idea is to avoid materializing the filtered batch and instead build the final batch directly, so essentially a fused filter-and-concat kernel. It makes sense to me.
However, I'm not sure whether BatchCoalescer already does this and we should probably just use that? And/or the optimization should probably live there instead of here?
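For readers unfamiliar with the term, "fused filter-and-concat" can be sketched in a few lines of plain Rust (illustrative only, not the arrow-rs BatchCoalescer API): instead of calling a filter kernel per batch and then a concat kernel over the results, survivors from every batch are written straight into one pre-sized output buffer.

```rust
// Fused filter + concat: each batch carries a boolean selection mask,
// and selected values from all batches land in a single output Vec,
// avoiding one intermediate allocation and copy per batch.
fn fused_filter_concat<T: Copy>(batches: &[(&[T], &[bool])]) -> Vec<T> {
    // Count survivors first so the output is allocated exactly once.
    let selected: usize = batches
        .iter()
        .map(|(_, mask)| mask.iter().filter(|&&keep| keep).count())
        .sum();
    let mut out = Vec::with_capacity(selected);
    for (vals, mask) in batches {
        out.extend(
            vals.iter()
                .zip(mask.iter())
                .filter(|(_, &keep)| keep)
                .map(|(v, _)| *v),
        );
    }
    out
}
```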


Labels

parquet (Changes to the parquet crate), performance

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[parquet] reduce the time spent in CachedArrayReader

5 participants