Description:
We encountered a performance issue related to syncContext.execute(() -> exitIdleMode()) when creating a gRPC stream on a not-yet-ready TCP connection.
Context:
Our business thread executes:
streamObserver = connect(); // -> internally calls ManagedChannel.newCall(...)
Under the hood, this triggers:
ManagedChannel.newCall(...)
→ syncContext.execute(() -> exitIdleMode())
→ starts NameResolver / LoadBalancer
→ LoadBalancerHelper.updateBalancingState(...) triggers updateSubchannelPicker(...)
→ DelayedClientTransport.reprocess()
If the TCP connection is not yet ready, the stream is placed in a pendingStreams list. Once the connection becomes ready, DelayedClientTransport.reprocess() will process all pending streams.
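To make the cost model concrete, here is a minimal simulation (plain Java, not gRPC code; the class and method names are illustrative stand-ins) of how streams queued before the transport is ready are all drained on whichever thread triggers reprocess():

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative simulation of DelayedClientTransport's queueing behavior:
// streams created before the connection is ready are queued, and the
// thread that flips the transport to ready pays the full drain cost.
class DelayedTransportSim {
    private final List<Runnable> pendingStreams = new ArrayList<>();
    private boolean ready = false;

    synchronized void newStream(Runnable start) {
        if (ready) {
            start.run();               // transport ready: start immediately
        } else {
            pendingStreams.add(start); // queue until the connection is ready
        }
    }

    // Mirrors the reported behavior of reprocess(): O(pendingStreams) work
    // executed synchronously on the calling thread.
    synchronized int reprocess() {
        ready = true;
        int drained = pendingStreams.size();
        for (Runnable r : pendingStreams) {
            r.run();
        }
        pendingStreams.clear();
        return drained;
    }
}
```

In the reported scenario, the thread calling reprocess() is the business thread inside syncContext.execute(...), so the entire drain happens on the application's latency-sensitive path.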
Problem:
When the number of pending streams is large, DelayedClientTransport.reprocess() becomes expensive, and it runs synchronously on the thread that entered syncContext.execute(...), i.e., the business thread that originally called connect().
This causes noticeable delays and performance degradation on our business logic path.
Question:
Is it possible to specify or offload the execution of syncContext.execute(() -> exitIdleMode()) to a dedicated thread pool or event loop, to avoid blocking the business thread?
If not, would it make sense to provide an option for decoupling such internal reprocessing logic from application threads?
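As a possible interim workaround (an assumption on our side, not a documented gRPC option), the first call that triggers exitIdleMode() could itself be issued from a dedicated single-thread executor, so that any subsequent reprocess() cost lands on that thread instead of the business thread. Here `connect` is a placeholder for the application method that calls ManagedChannel.newCall(...):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

// Sketch: move the stream creation that exits idle mode (and may later
// drain pendingStreams) off the business thread onto a dedicated executor.
class OffloadedConnect {
    private final ExecutorService channelWarmup =
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r, "grpc-warmup");
                t.setDaemon(true);
                return t;
            });

    // 'connect' stands in for the application call that invokes
    // ManagedChannel.newCall(...) and returns, e.g., a StreamObserver.
    <T> CompletableFuture<T> connectAsync(Supplier<T> connect) {
        return CompletableFuture.supplyAsync(connect, channelWarmup);
    }
}
```

The business thread can then obtain the stream via connectAsync(() -> connect()).join() (or compose on the future without blocking). This only shifts where the cost is paid, though; a built-in option to decouple reprocessing from application threads would still be preferable.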