panic on Writer.Close, runtime error: index out of range #15

Closed
toffaletti opened this issue Nov 17, 2015 · 10 comments

@toffaletti

Unfortunately I don't have a test to reproduce this, but over hundreds of thousands of calls we started seeing a few of these panics:

runtime error: index out of range
goroutine 148112966 [running]:
net/http.func·011()
        /usr/lib/go/src/net/http/server.go:1130 +0xbb
github.com/klauspost/compress/flate.(*compressor).deflateNoSkip(0xc20e431600)
        _vendor/src/github.com/klauspost/compress/flate/deflate.go:594 +0xc68
github.com/klauspost/compress/flate.(*compressor).close(0xc20e431600, 0x0, 0x0)
        _vendor/src/github.com/klauspost/compress/flate/deflate.go:773 +0x49
github.com/klauspost/compress/flate.(*Writer).Close(0xc20e431600, 0x0, 0x0)
        _vendor/src/github.com/klauspost/compress/flate/deflate.go:854 

We never saw this before I updated our snapshot on Nov 2nd. The previous update was Sep 8th.

@klauspost
Owner

Thanks for the report - I will investigate at once!

@klauspost
Owner

Can you confirm that deflate.go:594 is also https://github.com/klauspost/compress/blob/master/flate/deflate.go#L594 ?

@toffaletti
Author

Yes, and unfortunately there are two indexes used on that line. I'm not sure which is out of range.

@shishkander

@maruel found this bug too: https://code.google.com/p/chromium/issues/detail?id=552697#c4 .
The unit test in our code is a bit large, but it can reproduce this.

@klauspost
Owner

From the other report it is clear that d.tokens.n somehow gets too high. I am trying to recreate a scenario where that happens. Unfortunately, my 500k-file "webserver" test does not reproduce it.

I am of course continuing work on this.

@klauspost
Owner

OK. Are you checking the returned error values?

If you call Write after a write has already failed (as in the Chromium bug), I can see this error occurring.

Close after a failed write will do the same.

Both are of course bugs and will be fixed - I just want to be sure I haven't overlooked anything.
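
For context, a minimal sketch of checking those error values (assuming a flate writer over an in-memory buffer; names here are illustrative, not code from this issue). Stopping after a failed Write avoids the Write/Close-after-error pattern described above:

package main

import (
	"bytes"
	"log"

	"github.com/klauspost/compress/flate"
)

func compress(data []byte) ([]byte, error) {
	var buf bytes.Buffer
	w, err := flate.NewWriter(&buf, flate.DefaultCompression)
	if err != nil {
		return nil, err
	}
	// If Write fails, stop here: calling Write or Close again after a
	// failed write is exactly the pattern discussed in this issue.
	if _, err := w.Write(data); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	out, err := compress([]byte("hello, world"))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("compressed to %d bytes", len(out))
}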

@maruel

maruel commented Nov 17, 2015

We can reproduce it in this case:

  • Compressing, writing to an io.Pipe().
  • Using io.Copy() to write the input data to the compressor.
  • The length of the data is >= 64 kB + 1.
  • The read end of the io.Pipe() is closed in the middle of the stream.

I was planning to try to write a regression test before even bothering you about this, but too late, eh. :)
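
A rough sketch of that scenario (using the flate writer directly and deliberately closing the read end early; the sizes and error handling here are illustrative, not the actual regression test):

package main

import (
	"bytes"
	"io"
	"log"

	"github.com/klauspost/compress/flate"
)

func main() {
	// Input just past 64 kB so the stream spans more than one block.
	data := bytes.Repeat([]byte{'x'}, 64*1024+1)

	pr, pw := io.Pipe()

	// Simulate the consumer going away mid-stream: read a little,
	// then close the read end so further writes into the pipe fail.
	go func() {
		buf := make([]byte, 1024)
		pr.Read(buf)
		pr.CloseWithError(io.ErrClosedPipe)
	}()

	w, err := flate.NewWriter(pw, flate.DefaultCompression)
	if err != nil {
		log.Fatal(err)
	}

	// The copy fails once the pipe is closed; calling Close afterwards
	// exercises the Close-after-failed-Write path from this issue.
	if _, err := io.Copy(w, bytes.NewReader(data)); err != nil {
		log.Println("copy error:", err)
	}
	if err := w.Close(); err != nil {
		log.Println("close error:", err)
	}
}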

@maruel

maruel commented Nov 17, 2015

@toffaletti: a (shameless) plug for panicparse to make your stack traces shorter.

klauspost added a commit that referenced this issue Nov 17, 2015
Errors from the underlying writer were not being forwarded/returned; that is now done, and tests have been added for that functionality

Fixes #15
@klauspost
Owner

Thanks for helping identify this issue. I have merged the fix after testing.

@toffaletti
Author

In our code I don't control whether Write is called again after an error, because our write is essentially json.NewEncoder(gzipWriter).Encode(). I will open a new issue if this fix doesn't clear up the panic. Thanks so much for the prompt response despite so few details.
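
For reference, that write path looks roughly like the sketch below (hypothetical helper name, using the standard library's gzip writer as a stand-in): a single Encode call drives every intermediate Write, so the caller only sees the errors returned by Encode and Close.

package main

import (
	"compress/gzip"
	"encoding/json"
	"io"
	"os"
)

// encodeCompressed is a hypothetical stand-in for the write path
// described above: the json.Encoder performs all Writes against the
// gzip writer internally, so the caller can only check the errors
// returned by Encode and Close.
func encodeCompressed(w io.Writer, v interface{}) error {
	gz := gzip.NewWriter(w)
	if err := json.NewEncoder(gz).Encode(v); err != nil {
		gz.Close() // best effort; the stream is already broken
		return err
	}
	return gz.Close()
}

func main() {
	if err := encodeCompressed(os.Stdout, map[string]int{"answer": 42}); err != nil {
		panic(err)
	}
}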
