
gunzip: improve EOF handling #40

Open
cyphar wants to merge 4 commits into master
Conversation

@cyphar cyphar commented Feb 19, 2021

This fixes #38 and #39, though I'm not entirely sure whether you're happy with this approach.

To solve #39, we switch from using a channel for the block pool to a sync.Pool. This has the downside that the read-ahead goroutine can now allocate more blocks than the user requested. If that is not acceptable, I can try to figure out a different solution to this problem. With sync.Pool there is no risk of blocking on a channel send when no other goroutine is reading from it.

To solve #38, some extra io.EOF special-casing was needed in both WriteTo and Read. I think these changes are reasonable -- it seems that z.err should never store io.EOF (and there were only a few cases where it would, which I've now fixed), but let me know what you think.

Fixes #38
Fixes #39

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>

This matches NewWriter, and reduces the possibility of forgetting to
update one of the initialisation functions when making changes.

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
This stops us from causing goroutine deadlocks when we re-add a buffer
after the stream has been read and the read-ahead goroutine is dead.
Note that with this change, it is possible for more blocks to be
allocated than the user requested.

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
WriteTo should not return io.EOF because it is assumed to have io.Copy
semantics (namely, on success you return no error -- even if there were
no bytes copied). Several parts of WriteTo would return io.EOF -- all of
which need to be switched to special-case io.EOF.

In addition, Read would save io.EOF in z.err in some specific corner
cases -- these appear to be oversights and thus are fixed to not store
io.EOF in z.err.

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
Before this patchset, these tests would either lock up or fail with
spurious EOF errors.

Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
@klauspost (Owner)

@cyphar Thank you for looking through my ancient code. I remember it being quite painful when I looked through it a year ago.

@klauspost klauspost (Owner) left a comment


LGTM

@klauspost (Owner)

This does have the downside that the read-ahead goroutine can now end up allocating more blocks than the user requested

@cyphar Is this without limit? There must be a limit to how far it can read ahead, otherwise you will easily run the system out of memory.

@cyphar (Author)

cyphar commented Feb 19, 2021

@cyphar Is this without limit? There must be a limit to how far it can read ahead, otherwise you will easily run the system out of memory.

Yeah, it's without limit -- the issue is that sync.Pool doesn't have a way of limiting the size of the pool, so you could end up with an endlessly growing number of blocks. I will have to come up with another way of fixing the deadlock issue -- I think the problem with the channel approach is that the channel fills up for some reason. I'll take another look.

@klauspost (Owner)

@cyphar It would be great if you could look at that. Unbounded decompression is an issue. Thanks!

@rubenv

rubenv commented May 11, 2022

Got bitten by this, hard. @klauspost unbounded decompression is an issue, but currently the results are just wrong, which may lead to data loss.

Could we consider at least getting it to return data? I'd rather see my server blow up than lose data.

@cyphar (Author)

cyphar commented May 11, 2022

I'll take another look at how to fix the deadlock issue without switching to sync.Pool...

@klauspost (Owner)

Details escape me, but I would think that persisting errors, so that future calls are rejected, should be enough.

@rubenv

rubenv commented May 11, 2022

Problem is that if you do an io.Copy() on a reader that has buffered fully, you won't get an error. It'll just, silently, copy 0 bytes and call it done.

I'd be happy with an error, even if it was just "TODO". But not silently eating data, that's dangerous.

@cyphar (Author)

cyphar commented May 11, 2022

@klauspost The io.EOF handling is fixed in a separate commit (gunzip: handle io.EOF errors correctly in WriteTo), but the unit tests require the deadlocks to be fixed.

@rubenv

Problem is that if you do an io.Copy() on a reader that has buffered fully, you won't get an error. It'll just, silently, copy 0 bytes and call it done.

io.Copy()'s semantics are that it returns nil when the end of the file is reached, so you won't get io.EOF if there's no data to read. I guess the issue is that the compressor hasn't generated all the data? Is this issue also fixed by this PR? Do you have a test case I can add?

@klauspost (Owner)

The sync.Pool change doesn't work for us here and will cause serious regressions. As you proposed initially, please consider a solution that retains the original pool.

You are welcome to submit separate PRs, if that will simplify code review.

3 participants