Increasing initial flow control window for streams #219
All of those arguments apply more to the connection-level flow control window. The initial stream flow control window only determines the distribution of the flow control window. An endpoint could just make the number bigger from the start. There aren't many downsides to that.
It's true that if you get your initial flow control window wrong, you'll need to burn an RTT (or use 0-RTT to open a new connection). No further discussion for ten months argues that no one feels strongly. Do we really need this in v1?
After thinking about this more, I can also imagine cases where you'd like to decrease the initial flow control window for new streams (e.g., the peer's memory becomes more constrained), but that would introduce a race condition where new streams assumed the old, larger window value. The only solution I can think of is not enforcing the new, smaller value until it has been acknowledged. It's a bit of complexity, but it seems viable, and the complexity is only present for those who wish to send smaller values. I think we're going to end up wanting a transport parameters update frame for this use case, and potentially for the explicit max ack delay (issue #912) and probably a few more things down the line, so I'm leaning towards adding one now.
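The deferred-enforcement idea above can be sketched roughly as follows. All class and method names here are hypothetical illustrations, not from any actual QUIC implementation: the smaller window is advertised immediately, but the sender of the update keeps enforcing the old limit until the update has been acknowledged.

```python
# Hypothetical sketch of deferred enforcement when shrinking the
# initial stream flow-control window: the smaller limit is advertised
# right away but not enforced until the peer acknowledges the update,
# avoiding the race where in-flight streams assumed the larger value.

class FlowControlConfig:
    def __init__(self, initial_stream_window: int):
        self.enforced_window = initial_stream_window  # limit enforced now
        self.pending_window = None                    # smaller value in flight

    def request_decrease(self, new_window: int) -> None:
        # Advertise the smaller window, but keep enforcing the old one.
        self.pending_window = new_window

    def on_update_acked(self) -> None:
        # Only once the peer has acknowledged the update is the
        # smaller value safe to enforce.
        if self.pending_window is not None:
            self.enforced_window = self.pending_window
            self.pending_window = None

    def is_violation(self, stream_offset: int) -> bool:
        return stream_offset > self.enforced_window
```

Note that the extra state (one pending value) lives entirely on the side that chose to shrink its window, which matches the claim that the complexity is only paid by implementations that send smaller values.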
A generic mechanism is much harder to design correctly. Updating transport parameters creates the problem for every parameter, forcing every parameter to deal with the transition. If the intent is to update the initial flow control window, you could add a specific mechanism that marked a certain stream ID as the point at which the new setting applied. That would be much easier to design.
There's no guarantee a peer hasn't already created a larger stream number than the one you specify, unless you state the update can't apply to any streams smaller than the max stream ID, which seems unfortunate. The approximate mechanism I'm proposing is that for parameters that want updating, they MUST take effect when the UPDATE_TRANSPORT_PARAMS frame is received, and the sender MAY assume that they have taken effect as soon as the packet containing the UPDATE_TRANSPORT_PARAMS frame is acked.
What about limits that are exceeded while the update is in flight?
They can't be enforced until the sender receives an acknowledgement.
And not for packets that are reordered around that acknowledgment. That's a lot of complexity. We had to deal with negative flow control credits in h2, and I'm still not certain that it is implemented correctly. You are asking for something that is incrementally more complex again.
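For context, the h2 situation being referenced is that RFC 7540 applies a change to SETTINGS_INITIAL_WINDOW_SIZE as a delta to every open stream's window, which can drive a stream's flow-control credit negative. A rough sketch of that behavior, with illustrative (not real-library) names:

```python
# Sketch of h2-style "negative flow-control credit": a shrink of the
# initial window size is applied as a delta to every open stream, so a
# stream that already received a lot of data can end up with credit
# below zero and must not receive more until credit is replenished.

class StreamWindow:
    def __init__(self, initial_window: int):
        self.credit = initial_window  # bytes the peer may still send
        self.received = 0

    def on_data(self, n: int) -> None:
        self.received += n
        self.credit -= n

    def on_initial_window_change(self, delta: int) -> None:
        # h2 applies the delta (new initial size minus old) to every
        # open stream; a negative delta can push credit below zero.
        self.credit += delta

    def may_receive(self) -> bool:
        return self.credit > 0
```

A stream that has received 60,000 of a 65,536-byte window and then sees the initial size drop to 16,384 ends up deeply negative, which is exactly the state implementations must track correctly.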
It's only a lot of complexity for implementations that want to decrease initial flow control windows, or to make similar changes that would render prior behavior no longer acceptable. Some (probably most) implementations will only want to increase windows, but I think it's worth thinking about how to handle the decrease case as well.
It's complexity on their peers that I worry most about. I'm sure that if you want to reduce windows, then you will take on that complexity. But that's not necessarily true of everyone you talk to.
A reduction in permissions could be seen as a new connection path, and a new path might need some parameters updated. If a new stream is created on a new path, it would seem to be known to respect the parameters of that path. So drastic, non-trivial changes could possibly be handled as a migration concern. EDIT: originally wrote new connection, meant new stream. EDIT2: there are edge cases if retransmission for stream creation happens on yet another path (new path = new connection ID).
I will say that the ability to decrease a window is something I wanted in H2, and on a number of occasions I have been sad that it did not make it in. The feature may very well be worth the complexity.
On the peer side, it's easy. All newly created streams get the new window value. All previously created streams are unchanged. I think it's just a matter of copying the new value over the old value.
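The peer-side rule above really is just a copy, as this minimal sketch shows (names are hypothetical): only the template used for future streams changes, so existing streams keep the window they started with.

```python
# Sketch of the receiver-side rule: streams created after the update
# use the new initial window; streams created before it are untouched.

class Connection:
    def __init__(self, initial_window: int):
        self.initial_window = initial_window  # template for new streams
        self.streams = {}                     # stream_id -> stream window

    def apply_update(self, new_initial_window: int) -> None:
        # Copy the new value over the old one; nothing else changes.
        self.initial_window = new_initial_window

    def create_stream(self, stream_id: int) -> int:
        # Each stream snapshots the template at creation time.
        self.streams[stream_id] = self.initial_window
        return self.streams[stream_id]
```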
This is fundamentally the same problem we dealt with in HTTP/QUIC with SETTINGS mid-connection. In order to concretely specify the time at which the changes took effect, we wound up with:
Ultimately, we decided that it was easier and sufficient to just say to create a new connection if you want to change something. You can simplify a lot of that away if you say that the settings at the time of stream creation govern for the lifetime of the stream; then you just need the ACK that says which stream starts the new epoch.
Discussed with editors. Parking until we get new information, or added urgency.
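The "settings at stream creation govern for the lifetime of the stream" rule can be sketched with an epoch table keyed by the first stream ID each update applies to; both sides can then agree on which streams use which settings. Names here are hypothetical, and epochs are assumed to be added in increasing stream-ID order:

```python
# Sketch of epoch-based settings: each update names the first stream
# ID it applies to, and a stream's window is fixed by whichever epoch
# it was created in.
import bisect

class SettingsEpochs:
    def __init__(self, initial_window: int):
        # List of (first_stream_id, window), kept sorted by stream ID.
        self.epochs = [(0, initial_window)]

    def add_epoch(self, first_stream_id: int, window: int) -> None:
        # Assumes epochs arrive in increasing stream-ID order.
        self.epochs.append((first_stream_id, window))

    def window_for(self, stream_id: int) -> int:
        # The last epoch whose first stream ID is <= stream_id governs.
        starts = [e[0] for e in self.epochs]
        idx = bisect.bisect_right(starts, stream_id) - 1
        return self.epochs[idx][1]
```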
This is very valuable for long-delay links. Otherwise, the small window limit interacts with congestion control and results in a very slow ramp-up of the congestion window.
Tokyo conclusion: it might be nice to have a facility that allowed a client to send updated transport parameters under encryption, and the flow control characteristics are not optimal, but we don't see a strong reason to do this. Flipping to quicv2 and maybe considering this as a protocol extension. |
It may be desirable for the client to increase the flow control window mid-connection (it observes that the connection actually has a very high BDP; it decides, based on an application request by the peer, that it will send a lot of data; it decides that it trusts the peer for some reason; potentially other options). Currently, we can bump the connection flow control window without an issue, but any new individual streams are stuck with whatever was initially negotiated. We may want to add a mechanism to increase the initial stream flow control window mid-connection.
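The limitation being described can be sketched as follows (hypothetical names): a receiver can raise the limit on any existing stream individually with MAX_STREAM_DATA, but every newly opened stream still starts at the initial window fixed by the handshake transport parameters.

```python
# Sketch of the gap: per-stream limits can grow mid-connection via
# MAX_STREAM_DATA, but the initial window for *new* streams is frozen
# at the value negotiated in the handshake.

class Receiver:
    def __init__(self, initial_stream_window: int):
        # Fixed at the handshake; no mechanism to update it later.
        self.initial_stream_window = initial_stream_window
        self.stream_limits = {}  # stream_id -> current limit

    def open_stream(self, stream_id: int) -> None:
        # New streams always start at the handshake value.
        self.stream_limits[stream_id] = self.initial_stream_window

    def send_max_stream_data(self, stream_id: int, limit: int) -> None:
        # Per-stream limits can only grow, never shrink.
        self.stream_limits[stream_id] = max(
            self.stream_limits[stream_id], limit
        )
```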