SETTINGS_MAX_CONCURRENT_STREAMS #38
Hey Brian, could you please bring these up on the list? Otherwise they'll hide in the issues list. Also, in the future, if things are separable as different issues, please make them so. E.g., here, I would have liked to see at least two issues: one for the minimum number of client-initiated streams, the other for defaulting to no-push. Thanks!
Hey Mark, I talked to Brian. Let's dedicate this issue #38 to "minimum number of client-initiated streams". I'll create a new issue focused on "defaulting to no-push", and I'll bring both up on the mailing list. Thanks,
For "minimum number of client-initiated streams", there is no requirement at this time to support this scenario and complicate the protocol. Closing.
We are hitting this: a server is advertising max=1 AFTER we have already sent 100 requests out the door. I would love to see an initial value of 8 be required, which would at least allow the first 8 requests. This was hitting an Apple API.
I.e., we sent our SETTINGS frame and immediately started sending (we do 80,000 rps, by the way), and we received the server's SETTINGS frame after that. We do not like to wait for the full handshake, nor should we have to.
This is a server problem. Any server that sets a limit of 1 is either missing the point of the protocol, or is under stress. The server is - of course - entitled to set any limit that it wants, but it does have to gracefully handle the client exceeding its limit initially. Note that once we deploy TLS 1.3, the server settings can be sent to the client before the client starts sending. You might find that TCP won't keep up with 80k requests/second anyway. If you are sending at that sort of rate, it might pay to talk to the server operator about what you can do. There are plenty of good options available.
Accept Limit vs Initial Limit
Currently an endpoint advertises what it is capable of accepting:
• When a client sends SETTINGS_MAX_CONCURRENT_STREAMS = 123, it is saying that it will accept up to 123 concurrent pushed streams.
• When a server sends SETTINGS_MAX_CONCURRENT_STREAMS = 123, it is saying that it will accept up to 123 concurrent HTTP request streams.
Are there scenarios for an endpoint to advertise what it is capable of issuing? For example, is it useful for a server to know that a client will issue at most 123 concurrent HTTP request streams? Or is it useful for a client to know that a server will issue at most 123 concurrent push streams? If the answer is "no", then we can avoid complicating the protocol.
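To make the "accept limit" semantics above concrete, here is a minimal sketch of an endpoint enforcing its peer's advertised limit before opening a new stream. The class and method names are illustrative assumptions, not taken from any real HTTP/2 implementation:

```python
class StreamLimiter:
    """Tracks open streams against the peer's advertised accept limit.

    `peer_limit` is the value the peer sent in its
    SETTINGS_MAX_CONCURRENT_STREAMS setting (hypothetical plumbing).
    """

    def __init__(self, peer_limit: int) -> None:
        self.peer_limit = peer_limit
        self.open_streams = 0

    def can_open_stream(self) -> bool:
        # The advertised value caps *concurrent* streams, so closed
        # streams free up capacity.
        return self.open_streams < self.peer_limit

    def open_stream(self) -> None:
        if not self.can_open_stream():
            raise RuntimeError("would exceed peer's SETTINGS_MAX_CONCURRENT_STREAMS")
        self.open_streams += 1

    def close_stream(self) -> None:
        self.open_streams -= 1
```

The race discussed below arises precisely because `peer_limit` is unknown until the peer's SETTINGS frame arrives, yet the client may already have streams in flight.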
Limit Values
There is a race condition where the client can issue more streams to the server before the server can advertise its accept limit to the client. Note that a race condition in the reverse path is not possible, because the client must issue a SYN_STREAM before the server can push anything, which means the client can always send its initial SETTINGS frame before emitting the first SYN_STREAM. (And in the future, it will be mandatory for the client to send the SETTINGS frame upon connection.) Furthermore, there is no clear rationale for requiring the value of SETTINGS_MAX_CONCURRENT_STREAMS to "be no smaller than 100". To offer clearer requirements, the following is suggested:
• A server MUST be able to handle at least 8 concurrent streams initiated by the client.
• A server MUST NOT advertise a value less than 8.
• A client MUST generate a session error if it receives a value less than 8 from the server.
• The default value emitted by servers is 8. The default value emitted by clients is 0.
• It is recommended that servers pick a much larger value to allow parallelism.
This guarantees a minimum value so that we don't fall into the race-condition hole, while being large enough that the client is not bottlenecked on RTT for its initial requests. A client-side default of 0 means the connection defaults to no-push; that is, a client has to proactively advertise a non-zero value for the server to enable push.
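The proposed rules above can be sketched as simple validation logic. This is only an illustration of the proposal in this issue, not part of any published SPDY/HTTP-2 specification; the constant and function names are assumptions:

```python
MIN_SERVER_LIMIT = 8       # proposed floor for server-advertised values
DEFAULT_SERVER_LIMIT = 8   # proposed server-side default
DEFAULT_CLIENT_LIMIT = 0   # proposed client-side default: no push

def validate_server_limit(value: int) -> int:
    """Client-side check of a server's SETTINGS_MAX_CONCURRENT_STREAMS.

    Per the proposal, a value below 8 is a session error.
    """
    if value < MIN_SERVER_LIMIT:
        raise ValueError(
            f"session error: server advertised {value} < {MIN_SERVER_LIMIT}"
        )
    return value

def push_enabled(client_limit: int = DEFAULT_CLIENT_LIMIT) -> bool:
    """Push is enabled only if the client advertises a non-zero limit."""
    return client_limit > 0
```

Under these rules, a client that never sends a SETTINGS frame implicitly disables push, and a server advertising max=1 (as in the Apple API report above) would be rejected at the session level.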