Skip zero-ing primitive nulls #1280

Merged (1 commit) on Feb 8, 2022
Conversation

@tustvold (Contributor) commented on Feb 7, 2022

Which issue does this PR close?

Closes #1279.

Rationale for this change

A marginal performance increase, and a way to gauge the appetite for these sorts of optimizations. In particular, the obvious next question is: do we need to be zero-initializing the buffers to start with? I'm not entirely sure what Rust's safety rules say about uninitialized memory for POD types. Are there invalid bit sequences for some types (e.g. f32)? If not, why does it matter?
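
For reference, a minimal sketch of the two allocation strategies in question (illustrative only, not the crate's actual buffer code):

```rust
use std::mem::MaybeUninit;

fn main() {
    // Zero-initialized: always sound to read, at the cost of a memset.
    let zeroed: Vec<i32> = vec![0; 1024];
    assert_eq!(zeroed[0], 0);

    // Uninitialized: skips the memset, but in Rust reading any element before
    // it has been written is undefined behavior, even for POD types like i32
    // where every bit pattern is a valid value. MaybeUninit makes that
    // obligation explicit in the type system.
    let mut uninit: Vec<MaybeUninit<i32>> = Vec::with_capacity(1024);
    // SAFETY: MaybeUninit<i32> may validly be left uninitialized.
    unsafe { uninit.set_len(1024) };
    // Every element must be written before it is read or `assume_init` is called.
}
```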

arrow_array_reader/read Int32Array, plain encoded, optional, half NULLs                                                                             
                        time:   [24.527 us 24.556 us 24.588 us]
                        change: [-6.4746% -5.3769% -4.4004%] (p = 0.00 < 0.05)
                        Performance has improved.
arrow_array_reader/read Int32Array, dictionary encoded, optional, half NULLs                                                                             
                        time:   [32.846 us 32.858 us 32.875 us]
                        change: [-9.1480% -8.3967% -7.9605%] (p = 0.00 < 0.05)
                        Performance has improved.

What changes are included in this PR?

Alters ScalarBuffer::pad_nulls to not zero the source location on read.
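
As an illustrative sketch of the idea (simplified; not the crate's actual pad_nulls signature): decoded non-null values sit densely packed at the front of the buffer and are spread backwards to their final row positions, and the change is simply to stop writing zeros into the slots a value is moved out of.

```rust
/// Hypothetical, simplified version of the `pad_nulls` idea: `values[..num_values]`
/// holds the decoded non-null values densely packed; spread them out so that
/// `values[i]` holds the value for row `i` wherever `is_valid(i)` is true.
fn pad_nulls(values: &mut [i32], num_values: usize, is_valid: impl Fn(usize) -> bool) {
    let mut src = num_values; // one past the last packed value
    for dst in (0..values.len()).rev() {
        if is_valid(dst) {
            src -= 1;
            values[dst] = values[src];
            // Previously something like `std::mem::take(&mut values[src])` would
            // also zero the source slot on read; skipping that write leaves an
            // arbitrary, but still validly initialized, value behind.
        }
    }
}
```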

Are there any user-facing changes?

Buffers that previously contained zeros in null positions will now contain arbitrary, but guaranteed valid, values.

Edit: we must zero-initialize memory in general, at least for floats, to avoid accidentally including a signalling NaN.

@github-actions bot added the parquet (Changes to the parquet crate) label on Feb 7, 2022
@sunchao (Member) left a comment

LGTM.

I think we can completely avoid pad_nulls if we interleave definition level decoding and value decoding. This is what Spark does at the moment.

@tustvold (Contributor, Author) commented on Feb 8, 2022

> This is what Spark does at the moment.

Tbh I'm surprised that is more performant, if it even is, as you're putting a branch inside the main body of the loop... Something to measure for sure 😄

@sunchao (Member) commented on Feb 8, 2022

Well, it's the same branch for decoding levels though; it's just that we don't have to store all the decoded definition levels in a vector first and then start decoding values.

Imagine we have 100 values to decode: the first 50 are null and the rest are non-null (so the max definition level is 1). We can:

  1. read the next RLE-encoded value from the definition level buffer, which is 0
  2. (assuming the validity buffer is zero-initialized) increment the offset by 50
  3. read the next RLE-encoded value from the definition level buffer, which is 1
  4. read 50 values from the value buffer using the batch API, and also set the validity bits, as sketched below
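
A rough sketch of that interleaved loop (the reader closures and the `(level, run_length)` iterator here are hypothetical stand-ins, not the crate's actual APIs):

```rust
use std::ops::Range;

/// Hypothetical sketch of interleaving definition-level and value decoding,
/// which would avoid a separate `pad_nulls` pass. `def_levels` yields RLE
/// runs as `(level, run_length)` pairs; `out` is assumed zero-initialized
/// so null runs can simply be skipped over.
fn decode_interleaved(
    def_levels: impl Iterator<Item = (i16, usize)>,
    mut read_values: impl FnMut(&mut [i32]),
    mut set_valid: impl FnMut(Range<usize>),
    out: &mut [i32],
) {
    let mut offset = 0;
    for (level, run_len) in def_levels {
        if level == 1 {
            // Non-null run: decode values directly into their final
            // positions and mark the corresponding validity bits.
            read_values(&mut out[offset..offset + run_len]);
            set_valid(offset..offset + run_len);
        }
        // Null run (level == 0): nothing to decode, just skip past it.
        offset += run_len;
    }
}
```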

@alamb (Contributor) left a comment

Thanks @tustvold and @sunchao -- arrow 10 will be even better than arrow 9!

@alamb merged commit e44b59e into apache:master on Feb 8, 2022
Labels: parquet (Changes to the parquet crate), performance

Linked issue: ScalarBuffer::pad_nulls Zeros Out Padded Data Unnecessarily