Fix page size on dictionary fallback #2854
Conversation
Thank you 👍
@@ -1108,6 +1110,55 @@ mod tests {
        roundtrip(batch, Some(SMALL_SIZE / 2));
    }

    #[test]
    fn arrow_writer_page_size() {
        let mut rng = thread_rng();
I think we should either seed this, or loosen the assert below. Otherwise I worry that, depending on what values are generated, we may end up with more or fewer pages (the dictionary page will only spill once it has seen sufficiently many distinct values, which could technically occur at any point).
        let props = WriterProperties::builder()
            .set_max_row_group_size(usize::MAX)
            .set_data_pagesize_limit(256)
You could potentially set the dictionary page size smaller to verify that as well, but up to you
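As a sketch, the suggestion above might look like the following, assuming the parquet crate's WriterProperties builder (the 128-byte value is hypothetical, chosen only to force an early dictionary spill):

```rust
// Sketch only: cap both the dictionary page and the data pages so the test
// exercises the dictionary fallback path as well as data page splitting.
let props = WriterProperties::builder()
    .set_max_row_group_size(usize::MAX)
    .set_data_pagesize_limit(256)
    .set_dictionary_pagesize_limit(128) // hypothetical limit for illustration
    .build();
```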
So I think there are still some issues here. It is still ignoring the size limit, though it is at least respecting the write_batch_size.
That is expected and, I believe, consistent with other parquet writers. The limit is best effort.
@@ -551,7 +551,10 @@ where

     match &mut encoder.dict_encoder {
         Some(dict_encoder) => dict_encoder.encode(values, indices),
-        None => encoder.fallback.encode(values, indices),
+        None => {
+            encoder.num_values += indices.len();
I'm guessing the problem was that whilst the estimated_data_page_size would increase, the lack of any values would cause it to erroneously not try to flush the page?
In particular https://github.com/apache/arrow-rs/blob/master/parquet/src/column/writer/mod.rs#L567
yep, exactly
@@ -551,7 +551,10 @@ where

     match &mut encoder.dict_encoder {
         Some(dict_encoder) => dict_encoder.encode(values, indices),
-        None => encoder.fallback.encode(values, indices),
+        None => {
+            encoder.num_values += indices.len();
Should we be doing this regardless of whether we've fallen back? I think currently this will fail to flush a dictionary-encoded data page even if it has reached sufficient size?
Maybe. When we do it that way it causes a panic, which may also be a bug:
General("Must flush data pages before flushing dictionary")
I think we need to reset num_values to 0 when we flush a data page
I think it already does that, right?
num_values: std::mem::take(&mut self.num_values),
👍
Benchmark runs are scheduled for baseline = c3aac93 and contender = 0268bba. 0268bba is a master commit associated with this PR. Results will be available as each benchmark for each run completes.
Which issue does this PR close?
Closes #2853
Rationale for this change
On fallback, ByteArrayEncoder wasn't tracking the number of values written, so when the dictionary page hit the limit and we fell back, all remaining data was written in a single data page.

What changes are included in this PR?
Make sure ByteArrayEncoder tracks the number of encoded values after it falls back to the fallback encoder.

Are there any user-facing changes?
This will change the way data pages are laid out in some cases.