client: detect max_concurrent_streams coming from the server #6576
Yes, same answer... We do have a sketch of a design to incorporate the max streams information in gRPC to create new connections when it's blocked on max streams, but it's not a high priority for us right now. What is your use case / why do you need this information?
My use case is that the server sets a value, and requests can queue up on the client side when traffic volume is high enough. We want to have a connection pool where each connection doesn't carry more concurrent streams than what the server sets.
That sounds similar to the situation that prompted the design mentioned above. Can you give us any more information on your system / constraints? E.g.:
Thanks!
Yes
Third party.
It seems to be static at 128, but I am not very certain about that.
Yes
No
We will keep track of the concurrent streams on a given connection so that it doesn't exceed the limit. If it reaches the limit, we create new connections to the same address.
This issue is labeled as requiring an update from the reporter, and no update has been received after 6 days. If no update is provided in the next 7 days, this issue will be automatically closed.
Thank you for the info. It seems like your use case would be satisfied by the design I was alluding to, and it does come up periodically, but unfortunately it is not currently a priority for the team. Let's fold this under grpc/grpc#21386, as it would require a cross-language effort to implement. FWIW, there's an implementation in Google's cloud client libraries that can do connection pooling, which may be of interest to you:
Pretty much the same question as in #3127, is it still not possible today? Tagging @dfawley for historical context.