[bdp] Maximum value limits throughput on high-latency connections #2400
@euroelessar the 4MB window size limit is set based on standard TCP settings -- most TCP connections can't effectively utilize any larger of a window size. What is the discrepancy between our out-of-the-box 40MB/s and what you're seeing with iperf or when you set the window sizes manually?
Currently the difference is 40MBps vs 100MBps for cross-zone traffic (≈150% delta), and 300MBps vs 450MBps for traffic within a metro (≈50% delta). But within a metro we're likely hitting a CPU utilization bottleneck at 450MBps.
Looks like 4MB is definitely too conservative of a default value - we are open to increasing this, or possibly reading the TCP settings and mirroring the receive window size when setting our flow control. @euroelessar could you do a quick test?
Sure.
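A minimal sketch of what "reading the TCP settings and mirroring the receive window size" could look like (hypothetical; grpc-go does not do this today). On Linux, the sysctl `net.ipv4.tcp_rmem` holds three values, "min default max", and the third field is the per-socket receive-buffer ceiling that the kernel's autotuning can grow to:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// maxRecvWindow parses the "min default max" contents of
// /proc/sys/net/ipv4/tcp_rmem and returns the max receive-buffer size,
// which could seed gRPC's connection-level flow-control window.
func maxRecvWindow(tcpRmem string) (int32, error) {
	fields := strings.Fields(tcpRmem)
	if len(fields) != 3 {
		return 0, fmt.Errorf("unexpected tcp_rmem format: %q", tcpRmem)
	}
	max, err := strconv.ParseInt(fields[2], 10, 32)
	if err != nil {
		return 0, err
	}
	return int32(max), nil
}

func main() {
	// Example value; on a real system, read /proc/sys/net/ipv4/tcp_rmem.
	w, err := maxRecvWindow("4096 131072 6291456")
	if err != nil {
		panic(err)
	}
	fmt.Println(w) // 6291456 (6MB)
}
```

The function name and the idea of wiring this value into flow control are assumptions for illustration; the sysctl format itself is standard Linux behavior.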
What version of gRPC are you using?
grpc-go 1.14 (but present on master as well)
What did you do?
Use server-side streaming (single response stream over single connection) and measure throughput.
The latency between two hosts is 100ms.
What did you expect to see?
Performance is limited by network stack or CPU, and is on par with iperf.
What did you see instead?
Performance is limited to 40MB/s. Both CPU and network are under-utilized.
gRPC network stack quickly reaches limit of 4MB window size and doesn't scale it further.
Manually specifying both InitialWindowSize and InitialConnWindowSize, or modifying the grpc-go codebase to increase bdpLimit, resolves the performance issue as well, but it doesn't scale well for busy or otherwise low-throughput networks.