visibility into client-side queuing and flow control #11114
Comments
We already package up that information into the DEADLINE_EXCEEDED status message. And we have infrastructure where we could add more, if something is missing. What are you trying to use this information for? Are you wanting code that reacts dynamically based on the results, or are you wanting to gather more details for a human to debug?
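For reference, a minimal sketch of reading that detail out of a failed call on the application side. The DeadlineDebug helper, its logging, and the Supplier wrapper are illustrative rather than an existing API, and the exact wording of the status description is an implementation detail:

```java
import io.grpc.Status;
import io.grpc.StatusRuntimeException;
import java.util.function.Supplier;

// Illustrative helper: run a blocking RPC (supplied as a lambda) and log the
// status description when the deadline is exceeded. Per the comment above,
// gRPC packs its extra detail into that description.
final class DeadlineDebug {
  static <T> T callAndLogDeadline(Supplier<T> rpc) {
    try {
      return rpc.get();
    } catch (StatusRuntimeException e) {
      if (e.getStatus().getCode() == Status.Code.DEADLINE_EXCEEDED) {
        System.err.println("DEADLINE_EXCEEDED: " + e.getStatus().getDescription());
      }
      throw e;
    }
  }
}
```

A call site would look like DeadlineDebug.callAndLogDeadline(() -> stub.withDeadlineAfter(200, TimeUnit.MILLISECONDS).someMethod(request)), with the stub and method name standing in for a real generated client.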
At the moment, debugging and understanding performance. (I suppose in the future, it's possible to explore automatically opening another channel if calls are blocked too long on MAX_CONCURRENT_STREAMS.)
Oh, I missed that you wanted messages. There are some bits you can infer from traces, but in general, talking about when a message is sent is complicated.
We have support for OpenCensus tracing today, and we're adding OpenTelemetry tracing. There's also Perfmark tracing, which shows more about thread interactions and wouldn't be as good for watching a single RPC you were suspicious about after the fact. The current stuff is lacking MAX_CONCURRENT_STREAMS visibility, and flow control is hard to express. There has been talk (and a private design) of opening new connections when hitting MAX_CONCURRENT_STREAMS.
I realize the most detailed efforts would require Netty plumbing or forking. But what about my suggestion in the OP to notify the stream tracer in
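For context, this is roughly what hooking the existing stream tracer looks like from the application side. A sketch assuming the (as far as I know, still experimental) CallOptions.withStreamTracerFactory hook; the interceptor name and logging are purely illustrative:

```java
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.ClientStreamTracer;
import io.grpc.Metadata;
import io.grpc.MethodDescriptor;

// Sketch: attach a per-RPC tracer so outbound-message events can be logged.
final class OutboundTracingInterceptor implements ClientInterceptor {
  private static final ClientStreamTracer.Factory TRACER_FACTORY =
      new ClientStreamTracer.Factory() {
        @Override
        public ClientStreamTracer newClientStreamTracer(
            ClientStreamTracer.StreamInfo info, Metadata headers) {
          return new ClientStreamTracer() {
            @Override
            public void outboundMessageSent(
                int seqNo, long optionalWireSize, long optionalUncompressedSize) {
              // Fires when the message is handed to the transport, not when it
              // reaches the network; that gap is what this issue is about.
              System.err.println("outbound message " + seqNo + " passed to the transport");
            }
          };
        }
      };

  @Override
  public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
      MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
    return next.newCall(method, callOptions.withStreamTracerFactory(TRACER_FACTORY));
  }
}
```

The point of the sketch is that outboundMessageSent is the only per-message outbound hook available to applications today, and it reports hand-off to the transport rather than transmission on the wire.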
CC @YifeiZhuang
I think adding a
We talked about this in the cross-language meeting, and we had consensus that a notification when the last bytes are written to the wire sounds fair. @benjaminp, do you want to send a PR for this? cc @ejona
Outgoing messages in a call can be internally queued by gRPC due to HTTP/2 flow control. Client calls can additionally be delayed from even starting because the underlying HTTP/2 connection has hit its MAX_CONCURRENT_STREAMS limit. There should be a public API that can observe gRPC-internal queuing of outgoing messages. This could be useful when debugging a DEADLINE_EXCEEDED RPC; it's very informative to know whether the request never left the client's queues.

Basically, I'd like to revive #8945. To recapitulate the findings on that issue:
- StreamTracer.outboundMessageSent is called after a message is sent to the transport, not when the message hits the network.
- onReady callbacks are strongly correlated with messages being released onto the network. However, unary calls do not receive onReady callbacks.
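As an aside on that second point, for streaming stubs the readiness signal can be watched by registering an onReady handler. A sketch using ClientResponseObserver, with the class name and logging purely illustrative:

```java
import io.grpc.stub.ClientCallStreamObserver;
import io.grpc.stub.ClientResponseObserver;

// Sketch: logs every transition back to the "ready" state, which (per the note
// above) correlates with queued messages draining onto the network.
final class ReadyLoggingObserver<ReqT, RespT> implements ClientResponseObserver<ReqT, RespT> {
  @Override
  public void beforeStart(ClientCallStreamObserver<ReqT> requestStream) {
    requestStream.setOnReadyHandler(
        () -> System.err.println("call became ready at " + System.nanoTime() + " ns"));
  }

  @Override
  public void onNext(RespT value) {
    // Response handling is irrelevant to this sketch.
  }

  @Override
  public void onError(Throwable t) {
    System.err.println("call failed: " + t);
  }

  @Override
  public void onCompleted() {}
}
```

Passing an instance to an async streaming stub call, e.g. asyncStub.someStreamingMethod(new ReadyLoggingObserver<>()), then logs each time the call becomes ready again; as noted above, this does not help for unary calls.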
One way to resolve this issue would be to change StreamTracer.outboundMessageSent to be called after the message is actually sent to the network. Alternatively, a new StreamTracer callback (outboundMessageTransmitted?) could serve that purpose.
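To make that second option concrete, a purely hypothetical sketch: outboundMessageTransmitted does not exist in grpc-java today, and both the name and the signature would be up for discussion in a PR.

```java
import io.grpc.ClientStreamTracer;

// Hypothetical: what a user-side tracer might look like if the proposed
// callback were added. Only outboundMessageSent exists today.
class WireVisibilityTracer extends ClientStreamTracer {
  @Override
  public void outboundMessageSent(int seqNo, long wireSize, long uncompressedSize) {
    // Existing hook: the message was handed to the transport and may still be
    // sitting in client-side queues (e.g. behind HTTP/2 flow control).
    System.err.println("message " + seqNo + " queued in the transport");
  }

  // Proposed hook (deliberately no @Override, since it is not part of the API
  // yet): would fire once the message has actually been written to the wire.
  public void outboundMessageTransmitted(int seqNo) {
    System.err.println("message " + seqNo + " hit the network");
  }
}
```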