
parquet: improve BOOLEAN writing logic and report error on encoding fail #443

Merged
merged 5 commits into apache:master from garyanaplan:parquet-boolean-write-fix on Jun 16, 2021

Conversation

garyanaplan (Contributor)

Which issue does this PR close?

Closes #349.

Rationale for this change

When writing BOOLEAN data, writing more than 2048 rows will overflow the
hard-coded 256-byte buffer set for the bit-writer in the PlainEncoder
(2048 values at one bit each is exactly 256 bytes, so the buffer is full
and the next value has nowhere to go). Once this occurs, further attempts
to write to the encoder fail because capacity is exceeded, but the errors
are silently ignored.

This fix improves the error detection and reporting at the point of
encoding and modifies the logic for bit-writing (BOOLEANs). The
bit_writer is initially allocated 256 bytes (as at present); then, each
time its capacity is exceeded, the capacity is increased by another
256 bytes.

This certainly resolves the current problem, but it's not exactly a
great fix because the capacity of the bit_writer could now grow
substantially.

Other data types seem to have a more sophisticated mechanism for writing
data which doesn't involve growing or having a fixed size buffer. It
would be desirable to make the BOOLEAN type use this same mechanism if
possible, but that level of change is more intrusive and probably
requires greater knowledge of the implementation than I possess.
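
For illustration, here is a minimal, self-contained sketch of that approach. The BitWriter below is a simplified stand-in for the one in parquet/src/util/bit_util.rs, encode_booleans is a hypothetical free function rather than the actual trait method in data_type.rs, and the error message is a placeholder; the capacity arithmetic follows the final commit on this branch, which converts the value count (bits) into bytes.

// Simplified stand-in for the parquet BitWriter; only the bookkeeping
// needed to illustrate the capacity check is modelled here.
struct BitWriter {
    buffer: Vec<u8>,
    bit_offset: usize,
}

impl BitWriter {
    fn new(capacity: usize) -> Self {
        BitWriter { buffer: vec![0; capacity], bit_offset: 0 }
    }
    fn bytes_written(&self) -> usize {
        (self.bit_offset + 7) / 8
    }
    fn capacity(&self) -> usize {
        self.buffer.len()
    }
    fn extend(&mut self, increment: usize) {
        let new_len = self.buffer.len() + increment;
        self.buffer.resize(new_len, 0);
    }
    // Returns false (instead of panicking) when there is no room left,
    // which is the signal the old code ignored.
    fn put_value(&mut self, _value: u64, num_bits: usize) -> bool {
        if self.bit_offset + num_bits > self.buffer.len() * 8 {
            return false;
        }
        // actual bit packing elided for brevity
        self.bit_offset += num_bits;
        true
    }
}

// The shape of the fix: grow the writer by 256 bytes when the incoming
// boolean values would exceed its byte capacity, and surface an error
// instead of silently dropping values if a write still fails.
fn encode_booleans(values: &[bool], bit_writer: &mut BitWriter) -> Result<(), String> {
    if bit_writer.bytes_written() + values.len() / 8 >= bit_writer.capacity() {
        bit_writer.extend(256);
    }
    for value in values {
        if !bit_writer.put_value(*value as u64, 1) {
            return Err("unable to put boolean value".to_string());
        }
    }
    Ok(())
}

fn main() {
    let mut writer = BitWriter::new(256);
    // 4096 booleans need 512 bytes, which would overflow the initial
    // 256-byte buffer without the capacity check above.
    let values = vec![true; 4096];
    encode_booleans(&values, &mut writer).unwrap();
    assert!(writer.capacity() > 256);
}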

What changes are included in this PR?

(see above)

Are there any user-facing changes?

No, although users may now encounter the encoding error that was previously silently ignored.

Comment on lines 158 to 162
if self.bw_bytes_written + values.len() >= self.bit_writer.capacity() {
    self.bit_writer.extend(256);
}
T::T::encode(values, &mut self.buffer, &mut self.bit_writer)?;
self.bw_bytes_written += values.len();
garyanaplan (Contributor, Author)

I'm going to add a comment myself! :-)

I just realised that I only want to do this checking if the encoding is for a Boolean, otherwise it's wasted work/memory. I'll think of the best way to achieve that.

Tacky, but I can't think of a better way to do this without
specialization.
Remove the byte tracking from the PlainEncoder and use the existing
bytes_written() method in BitWriter.

This is neater.
@garyanaplan (Contributor, Author)

Ok. I'm finished poking this now. I've isolated the changes required to 2 files and eliminated the original runtime impact from the PlainEncoder.

garyanaplan changed the title from "improve BOOLEAN writing logic and report error on encoding fail" to "parquet: improve BOOLEAN writing logic and report error on encoding fail" on Jun 10, 2021
alamb requested a review from sunchao on June 12, 2021
@alamb (Contributor) commented Jun 12, 2021

Thanks for the contribution @garyanaplan ! I will try and review this carefully tomorrow.

alamb requested a review from nevi-me on June 12, 2021
alamb added the bug and parquet (Changes to the parquet crate) labels on Jun 12, 2021
@codecov-commenter commented Jun 12, 2021

Codecov Report

Merging #443 (da8c665) into master (0c00776) will decrease coverage by 0.05%.
The diff coverage is 93.87%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master     #443      +/-   ##
==========================================
- Coverage   82.71%   82.65%   -0.06%     
==========================================
  Files         163      165       +2     
  Lines       44795    45556     +761     
==========================================
+ Hits        37051    37655     +604     
- Misses       7744     7901     +157     
Impacted Files Coverage Δ
parquet/src/data_type.rs 77.64% <60.00%> (-0.27%) ⬇️
parquet/tests/boolean_writer.rs 97.36% <97.36%> (ø)
parquet/src/util/bit_util.rs 93.14% <100.00%> (+0.07%) ⬆️
arrow/src/compute/kernels/partition.rs 97.50% <0.00%> (-1.70%) ⬇️
arrow/src/compute/kernels/sort.rs 94.11% <0.00%> (-0.86%) ⬇️
parquet/src/util/test_common/page_util.rs 91.00% <0.00%> (-0.67%) ⬇️
parquet/src/arrow/record_reader.rs 93.44% <0.00%> (-0.54%) ⬇️
parquet/src/column/reader.rs 74.36% <0.00%> (-0.38%) ⬇️
arrow/src/array/array_dictionary.rs 84.56% <0.00%> (-0.32%) ⬇️
arrow/src/array/transform/mod.rs 86.06% <0.00%> (-0.09%) ⬇️
... and 17 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0c00776...da8c665.

@alamb (Contributor) left a comment

Thank you again @garyanaplan. I reviewed this logic carefully and it seems reasonable to me. I think it would be good if someone more familiar with this code (@sunchao or @nevi-me) could also look at the approach.

Is there any way to provide a test for this code (aka the reproducer from https://github.com/apache/arrow-rs/issues/349)?

As the core issue appears to be that the return value of put_value wasn't being checked, I wondered if there were more places where the return value isn't checked, and it seems there may be:

/Users/alamb/Software/arrow-rs/parquet/src/data_type.rs
668:                 if !bit_writer.put_value(*value as u64, 1) {
/Users/alamb/Software/arrow-rs/parquet/src/encodings/encoding.rs
602:                 self.bit_writer.put_value(packed_value, bit_width);
607:                 self.bit_writer.put_value(0, bit_width);
/Users/alamb/Software/arrow-rs/parquet/src/encodings/levels.rs
118:                     if !encoder.put_value(*value as u64, bit_width as usize) {
/Users/alamb/Software/arrow-rs/parquet/src/encodings/rle.rs
265:                 .put_value(self.buffered_values[i], self.bit_width as usize);

  for value in values {
-     bit_writer.put_value(*value as u64, 1);
+     if !bit_writer.put_value(*value as u64, 1) {
Contributor

Since put_value returns false if there isn't enough space, you might be able to avoid errors with something like:

for value in values {
  if !bit_writer.put_value(*value as u64, 1) {
    bit_writer.extend(256);
    bit_writer.put_value(*value as u64, 1);
  }
}

Rather than returning an error

Member

Yea, we can either do this or make sure up front that there's enough capacity to write. One minor concern is that putting the if branch inside the for loop might hurt performance.

garyanaplan (Contributor, Author)

I found it hard to think of a good way to test this with the fix in place.

I preferred the "don't auto expand memory at the point of failure" approach because I'm fairly conservative and didn't want to make a change that was too wide in impact without a better understanding of the code. i.e.: my fix specifically targeted the error I reported and made it possible to detect in other locations.

I think a better fix would be to (somehow) pre-size the vector, or avoid having to size a vector for all the bytes that could be written, but that would greatly expand the scope of the fix.

Contributor

leaving the code as is seems fine to me

@sunchao (Member) left a comment

Good catch @garyanaplan!

@@ -661,8 +661,15 @@ pub(crate) mod private {
        _: &mut W,
        bit_writer: &mut BitWriter,
    ) -> Result<()> {
        if bit_writer.bytes_written() + values.len() >= bit_writer.capacity() {
Member

It seems values.len() here is the number of bits to be written? Should we use values.len() / 8?

garyanaplan (Contributor, Author)

I think this calculation is entirely in terms of bytes, so units should all be correct as is.

Member

Hmm, I'm sorry, but can you elaborate on why the unit of values is also bytes?

@alamb (Contributor) commented Jun 14, 2021

@garyanaplan what would you say to using the reproducer from #349 to test this issue? I realize it probably seems unnecessary for such a small code change, but the amount of effort that went into the reproducer was significant and I would hate to have some future optimization reintroduce the bug again.

If you don't have time, I can try to make time to create the test

@garyanaplan (Contributor, Author)

The problem with writing an effective test is that the error was only detected on file read and the read behaviour was to hang indefinitely. Taken together, those characteristics of the problem make crafting an effective test difficult.

To be effective, a test would need to write > 2048 boolean values to a test file, then read that file and not hang. I can think of ways to do that with a timeout and assume that if the read doesn't finish within the timeout, then it must have failed. Such a test would rely on multi-threaded or async testing for co-ordination. I don't think there's any async stuff in parquet yet, so a multi-threaded test would be required.

I'll knock something up and push it to this branch.
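
For reference, a rough, self-contained sketch of that thread/channel timeout pattern; write_then_read_file is a placeholder standing in for the actual write-then-read of the parquet test file, and the 5 second limit matches the timeout discussed below.

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Placeholder for writing > 2048 boolean rows to a parquet file and then
// reading every row back; the real test exercises the parquet APIs.
fn write_then_read_file() {
    // ... write the BOOLEAN column, then read the file back ...
}

fn main() {
    let (sender, receiver) = mpsc::channel();
    thread::spawn(move || {
        // If the reader hangs, this send never happens and the receive
        // below times out instead of hanging the whole test run.
        write_then_read_file();
        let _ = sender.send(true);
    });
    assert_ne!(
        Err(mpsc::RecvTimeoutError::Timeout),
        receiver.recv_timeout(Duration::from_millis(5000)),
        "read did not complete within 5 seconds"
    );
}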

@alamb (Contributor) commented Jun 14, 2021

I can think of ways to do that with a timeout and assume that if the read doesn't finish within timeout, then it must have failed.

@garyanaplan I don't think we need to do anything special for timeouts -- between the default rust test runner and github ci action timeouts, any test that hangs indefinitely will cause a failure (not run successfully).

The test ensures that we can write > 2048 rows to a parquet file and
that when we read the data back, it finishes without hanging (defined as
taking < 5 seconds).

If we don't want that extra complexity, we could remove the
thread/channel stuff and just try to read the file and let the test
runner terminate hanging tests.
@garyanaplan (Contributor, Author)

I'd already written the test, just been in meetings. If we'd rather rely on the test framework to terminate hanging tests, just remove the thread/mpsc/channel stuff and do a straight read after verifying the write looks ok.

println!("finished reading");
if let Ok(()) = sender.send(true) {}
});
assert_ne!(
Contributor

You could also check assert_eq!(Ok(true), receiver.recv_timeout(Duration::from_millis(5000))) as well.

However, I think that is equivalent to what you have here. 👍 thank you

@alamb (Contributor) commented Jun 14, 2021

I'd already written the test, just been in meetings. If we'd rather rely on the test framework to terminate hanging tests, just remove the thread/mpsc/channel stuff and do a straight read after verifying the write looks ok.

Either way is fine with me. Thank you @garyanaplan

values.len() reports the number of values to be encoded and so must
be divided by 8 (bits in a byte) to determine the effect on the byte
capacity of the bit_writer.
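
A small illustration of that arithmetic (needs_extend is a hypothetical helper written for this sketch, not code from the PR):

// values.len() counts boolean values, i.e. bits; dividing by 8 converts
// the count to bytes before comparing it with the writer's byte capacity.
fn needs_extend(bytes_written: usize, num_values: usize, capacity: usize) -> bool {
    bytes_written + num_values / 8 >= capacity
}

fn main() {
    // 2048 booleans occupy exactly 256 bytes, the initial buffer size,
    // so writing them into a fresh 256-byte bit_writer triggers a grow.
    assert!(needs_extend(0, 2048, 256));
    // A smaller batch fits without growing the buffer.
    assert!(!needs_extend(0, 1024, 256));
}
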
alamb merged commit 9f56afb into apache:master on Jun 16, 2021
garyanaplan deleted the parquet-boolean-write-fix branch on June 17, 2021
Labels: bug, parquet (Changes to the parquet crate)

Successfully merging this pull request may close these issues: parquet reading hangs when row_group contains more than 2048 rows of data