Incorrect connection error on implicit stream close from window update #312
Hm, the part that you cite doesn't seem to mention the issue you've described, right? You're receiving a WINDOW_UPDATE for a stream ID you haven't seen before?
In the case we've encountered, we've seen the stream ID before, since we've just closed the stream (we just sent a few megabytes of data upstream). However, regardless, we should never set a connection error based off of a window update frame, right?
Note: https://github.com/carllerche/h2/blob/master/src/proto/streams/streams.rs#L382 is where we branch on whether or not the stream has already been closed.
I believe the check at that line is the relevant part that explains why it's currently a connection error.
In this case, the stream is not in the idle state -- it's been locally closed. It might suffice in this case to not call `ensure_not_idle` for streams we've already seen.

My interpretation of the flow of events: we just closed stream 61 locally after finishing an upload on it, and the server then sent a window update for it. The question, then, is why that's triggering any error at all (since `next_stream_id` should be 63)... and even if it were to trigger an error, if we don't know that stream 61 is idle, we should not return a protocol error.
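For readers following along, here is a rough sketch of the shape of that code path (illustrative only -- the names and types are simplified stand-ins, not the actual h2 source): a window update for a stream that is no longer in the store falls through to an idleness check against `next_stream_id`.

```rust
use std::collections::HashMap;

// Simplified stand-ins for h2's internals; illustrative only.
#[derive(Debug, PartialEq)]
struct ProtocolError; // corresponds to Reason::PROTOCOL_ERROR

struct Recv {
    /// Next stream ID we expect the peer to initiate.
    next_stream_id: u32,
}

impl Recv {
    /// "Idle" means neither side has used this ID yet.
    fn ensure_not_idle(&self, id: u32) -> Result<(), ProtocolError> {
        if id >= self.next_stream_id {
            // Frames on idle streams are a connection-level error.
            Err(ProtocolError)
        } else {
            Ok(())
        }
    }
}

fn recv_window_update(
    store: &HashMap<u32, ()>, // stands in for the stream store
    recv: &Recv,
    id: u32,
) -> Result<(), ProtocolError> {
    match store.get(&id) {
        // Stream still tracked: apply the update to its window.
        Some(_stream) => Ok(()),
        // Stream not in the store: it was either closed (and dropped)
        // or never opened. This is the branch under discussion.
        None => recv.ensure_not_idle(id),
    }
}

fn main() {
    let store = HashMap::new(); // all streams already closed and dropped
    let recv = Recv { next_stream_id: 2 }; // client: no push promises seen
    // A window update for the closed local stream 61 hits the idle check:
    assert_eq!(recv_window_update(&store, &recv, 61), Err(ProtocolError));
}
```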
I have a local repro that (roughly) amounts to muxing a bunch of nontrivial uploads on the same connection, where the underlying internet connection is slow. Example trace:
That does seem incorrect then. What you propose makes sense; would you be up for submitting a PR (even better if it has a unit test)?
Yeah, I can put together a PR. Might need to do some more reading to determine how exactly we represent closed/idle states in the connection abstraction.
Really interesting: `next_stream_id` in this case appears to be 2!!?
I don't see what you mean about it being 2.
In this case this is a client, which is why it's confusing -- there should be no push promises involved at all in this log. In particular, I would have expected `next_stream_id` to be 63 here.
I've looked through the logs you pasted, but I must be missing it. I don't see `next_stream_id` anywhere.
Yeah, it's not printed out in master -- I modified it to dump `next_stream_id`. Unfortunately the full log dump is in the tens of MB, so it's a bit of a pain to upload -- do you have a location where you'd prefer me to post it? Here's the relevant subsection, along with the patch that produced it.
Okay, if I'm understanding things correctly, this is what happens: the client makes n requests to the server. These requests get stream ids 1, 3, ..., 2n-1. However, we only update the recv side's `next_stream_id` when the peer opens a stream, which for a client only happens on push promises -- so it stays at 2.

We don't encounter this in the common case because (usually) we would expect to finish sending the request in full prior to receiving any headers, so any window updates for a stream would arrive while the stream was still in the store.

Now, the server is allowed to send us window updates for any of stream ids 1, 3, ..., 2n-1, including after we have closed those streams locally (i.e. after those requests have completed and the closed streams have been dropped from the store). When such a window update arrives, the store lookup misses, and the idle check runs against the recv side's `next_stream_id` (still 2), so the closed stream is misclassified as idle.
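A small sketch of that bookkeeping (again illustrative, not the h2 internals), showing why the recv side's counter never moves for a client that sees no push promises:

```rust
// Each side tracks the next stream ID its initiating peer may use.
// Client-initiated streams are odd; server-initiated streams are even.
struct Side {
    next_stream_id: u32,
}

impl Side {
    /// Opening a stream on this side advances its counter past the ID.
    fn open(&mut self, id: u32) {
        self.next_stream_id = id + 2;
    }
}

fn main() {
    let mut send = Side { next_stream_id: 1 }; // streams we initiate
    let recv = Side { next_stream_id: 2 };     // streams the peer initiates

    // The client makes three requests on streams 1, 3, 5.
    for id in [1u32, 3, 5] {
        send.open(id);
    }

    assert_eq!(send.next_stream_id, 7); // send side advanced
    assert_eq!(recv.next_stream_id, 2); // no push promises: never advanced

    // A late window update for closed stream 5, checked against the
    // *recv* side, is misclassified as idle (5 >= 2) ...
    assert!(5 >= recv.next_stream_id);
    // ... while the send side correctly knows 5 was opened and closed.
    assert!(5 < send.next_stream_id);
}
```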
I believe the bug is actually that at this line here, https://github.com/carllerche/h2/blob/master/src/proto/streams/streams.rs#L397, the wrong "side" is checked. If you look at the rest of the function, receiving a window update for a locally-initiated stream should be validated against the send side's `next_stream_id`, not the recv side's -- which also explains why the recv side's counter is still 2.
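In sketch form, the proposed change would be to pick the side based on who initiated the stream before running the idleness check (simplified and hypothetical, not a patch against the real `streams.rs`):

```rust
// Illustrative sketch of checking the correct "side". Not a real patch.

/// Client-initiated stream IDs are odd, server-initiated IDs are even.
fn is_locally_initiated(id: u32, is_client: bool) -> bool {
    (id % 2 == 1) == is_client
}

fn ensure_not_idle(
    send_next_id: u32, // next ID *we* will initiate
    recv_next_id: u32, // next ID we expect the *peer* to initiate
    id: u32,
    is_client: bool,
) -> Result<(), &'static str> {
    // Before the fix, this always consulted recv_next_id. For a client
    // with no push promises, recv_next_id stays at 2, so every closed
    // locally-initiated stream (odd id >= 2) looked idle.
    let next_id = if is_locally_initiated(id, is_client) {
        send_next_id
    } else {
        recv_next_id
    };
    if id >= next_id {
        Err("PROTOCOL_ERROR: frame received on an idle stream")
    } else {
        Ok(())
    }
}
```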
Hm, that makes sense! Unfortunately my local repro is now failing at a different (earlier) stage, but I'll make that change and see if it makes the problem go away. Thoughts on writing an appropriate unit test?
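At the level of the sketch above, a unit test could pin the scenario down roughly like this (hypothetical, building on the `ensure_not_idle` sketch in the same file; a real test in h2 would instead drive a mock peer with actual HEADERS/DATA/WINDOW_UPDATE frames):

```rust
// Builds on the `ensure_not_idle` sketch above (same file assumed).
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn window_update_on_closed_local_stream_is_not_idle() {
        // Client opened streams 1..=61, so the send side is at 63;
        // no push promises arrived, so the recv side is still at 2.
        let (send_next, recv_next) = (63, 2);

        // A late WINDOW_UPDATE for the closed stream 61 must not be
        // rejected as a frame on an idle stream.
        assert!(ensure_not_idle(send_next, recv_next, 61, true).is_ok());

        // A WINDOW_UPDATE for stream 63, which was never opened, is
        // still a protocol error.
        assert!(ensure_not_idle(send_next, recv_next, 63, true).is_err());
    }
}
```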
The server is allowed to send window updates for streams which are (locally) already closed, which is handled here:
https://github.com/carllerche/h2/blob/master/src/proto/streams/streams.rs#L398
However, in the case where the received ID is higher than `self.next_stream_id`, this causes the connection to error with `Reason::PROTOCOL_ERROR`, rather than implicitly closing any idle streams. According to the HTTP/2 spec (RFC 7540 §5.1.1), the first use of a new stream identifier implicitly closes all streams in the idle state that might have been initiated by that peer with a lower-valued stream identifier.
In particular, treating it as a connection error causes all simultaneous requests to fail the next time we poll the connection, even if they would have succeeded.
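To make the implicit-closure behavior concrete, here is a minimal sketch (my reading of RFC 7540 §5.1.1, not h2 code): opening a stream marks every lower-numbered idle ID from the same peer as closed, so a later frame for one of those IDs is a frame on a closed stream, not an idle one.

```rust
// Minimal model of implicit closure: one counter per initiating peer
// instead of per-stream state.
struct PeerStreams {
    // IDs below this were opened or implicitly closed; IDs at or
    // above it are still idle.
    next_stream_id: u32,
}

impl PeerStreams {
    /// First use of `id` implicitly closes all lower idle IDs that
    /// this peer could have initiated (RFC 7540 section 5.1.1).
    fn open(&mut self, id: u32) {
        self.next_stream_id = self.next_stream_id.max(id + 2);
    }

    fn is_idle(&self, id: u32) -> bool {
        id >= self.next_stream_id
    }
}

fn main() {
    let mut client_streams = PeerStreams { next_stream_id: 1 };
    client_streams.open(61); // implicitly closes idle streams 1..=59

    // A WINDOW_UPDATE for stream 5 now targets a *closed* stream, which
    // should not take down the whole connection.
    assert!(!client_streams.is_idle(5));
    // Stream 63 is still idle; frames there are a protocol error.
    assert!(client_streams.is_idle(63));
}
```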