Response to RST_STREAM is a little unclear. #1878
Comments
Isn't this covered here? More generally though - I'm not sure exactly which frames count toward flow control - e.g. what about a lost padding-only packet that is never retransmitted? Or extension frames?
No, the offset field in RST_STREAM addresses only one direction -- the correct final offset from the RST_STREAM sender to the RST_STREAM receiver. I spent some time trying to figure out if the spec expected me to send RST_STREAM in response to RST_STREAM, and I decided that wasn't the case, even though the unidirectional stream state machines sort of imply it. The final offset from the RST_STREAM receiver to the RST_STREAM sender is never explicitly communicated, as far as I can tell.
Only STREAM frame contents count. Other frames, like padding, aren't buffered by the receiver.
@mikkelfj Flow control applies to data sent on streams in the form of STREAM frames. It does not apply to packets or other frame types. So a lost padding frame would have no bearing on flow control.
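To make that concrete, here is a minimal sketch of receiver-side accounting in which only STREAM frame payload bytes consume connection credit. The frame object and its fields (type, stream_id, offset, data) are hypothetical stand-ins, not from any particular implementation:

```python
# Minimal sketch: only STREAM frame bytes count toward connection flow
# control; PADDING, ACK, etc. are ignored. Credit is consumed by the
# highest offset seen per stream, so retransmitted data isn't counted twice.

class ConnectionFlowControl:
    def __init__(self, max_data: int):
        self.max_data = max_data  # connection-level credit we granted
        self.highest = {}         # stream_id -> highest offset seen

    def on_frame(self, frame) -> None:
        if frame.type != "STREAM":
            return  # non-STREAM frames have no bearing on flow control
        end = frame.offset + len(frame.data)
        if end > self.highest.get(frame.stream_id, 0):
            self.highest[frame.stream_id] = end
        if sum(self.highest.values()) > self.max_data:
            raise ConnectionError("FLOW_CONTROL_ERROR")
```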
@RyanatGoogle thanks - I did implement it once, but it was more than a year ago.
https://quicwg.org/base-drafts/draft-ietf-quic-transport.html#rfc.section.9.2.4
@mikkelfj is correct -- each direction is independent. If you receive a RST_STREAM, the data flowing toward you has ceased abruptly. This does not inherently mean that you cannot respond. If you choose not to respond with data, you send a RST_STREAM yourself, which then contains the flow control information you're discussing. Is there text we should clarify for this?
If RST_STREAM begets a return RST_STREAM, that's news to me. If everyone is happy with this state of affairs, I'll file a PR to make that a bit clearer in the spec. (I explicitly looked for this in the spec before filing, and couldn't find it.) I suppose it's implied by the state machines and so forth, but this is different from HTTP/2 and probably could be more explicit.
Yup, that's how it sort of needs to work, since both sides need to communicate an offset, and the two ways are via a FIN'd stream or a RST_STREAM frame. A PR to clarify that is always helpful.
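As a rough sketch of that behavior: an endpoint receiving RST_STREAM on a bidirectional stream still has to terminate its own sending direction one way or the other. The state names and methods below are illustrative assumptions, loosely modeled on the draft's stream state machines:

```python
# Illustrative only: state names and methods are assumptions, not the
# draft's normative API. The point is that our sending direction must
# still end with either a FIN or our own RST_STREAM, since one of those
# carries our final offset to the peer.

def on_rst_stream_received(stream):
    stream.recv_state = "ResetRecvd"  # peer's final offset was in the frame

    if stream.send_state in ("DataSent", "ResetSent"):
        return  # our final offset is already on the wire
    if stream.app_still_sending:
        return  # keep sending; an eventual FIN carries our final offset
    stream.send_rst_stream(error_code=0)  # otherwise reset our direction too
    stream.send_state = "ResetSent"
```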
Having just re-read the section on this, I don't agree that there is a problem. The text is very clear on this in Section 4.1, which specifically addresses the interaction of stream cancellation and flow control:
To answer the question: you have to assume ALL of it.
Also note that RST_STREAM does not require a peer RST_STREAM; the peer can alternatively continue sending data and/or FIN.
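In other words, the sender has to budget as if everything it ever sent on the stream was charged against the peer's connection window. A one-function sketch, where highest_sent_offset is an assumed per-stream field:

```python
# Sketch of the conservative assumption above: nothing is recovered just
# because a packet was lost and never retransmitted.

def connection_bytes_to_assume(streams) -> int:
    return sum(s.highest_sent_offset for s in streams)
```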
Renaming this issue to reflect the real problem, though I guess I'm the only one who was confused. I'll file a small PR to scratch my itch.
Merged #1893 manually.
For issue #1878. I was apparently the only one confused, but since this is different from HTTP/2 it may be good to make it crystal clear.
GitHub is acting out here. Fix merged manually.
In a bidirectional stream, if I receive a RST_STREAM, how am I to know how much of the data I have sent counts toward connection flow control?
For example, say I send 10 STREAM frames on stream 0, of 1000 B each, receive a RST_STREAM for stream 0 with offset 0, and then acks for packets 1:4 and 6:10. It is clear that I should deduct zero of the credit I am granting to the peer, but it is not clear how much connection credit the peer has deducted for this stream.
One way to solve this is to have a receive offset field in RST_STREAM. When I receive one of these, I can flush the send queue and recover any credit I had taken from myself. In this case, if the RST_STREAM had a max receive offset of 1000, I'd add 9000 back to my connection credit.
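To spell out the arithmetic with the numbers above (the receive-offset field is the proposal here, not part of the current draft):

```python
bytes_sent = 10 * 1000         # ten 1000 B STREAM frames on stream 0
peer_final_recv_offset = 1000  # hypothetical field carried in RST_STREAM

recovered = bytes_sent - peer_final_recv_offset
assert recovered == 9000       # credit added back to the connection window
```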
Inveterate optimizers might want this field to only exist for bidi streams. I don't particularly care.