[bdp] Maximum value limits throughput on high-latency connections #2400
What version of gRPC are you using?
grpc-go 1.14 (but present on master as well)
What did you do?
Use server-side streaming (single response stream over single connection) and measure throughput.
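The measurement loop on the client is roughly the sketch below; `pb.DataClient`, `pb.StreamRequest`, and the `Stream` method are hypothetical stand-ins for the generated code of whatever server-streaming service is being benchmarked:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"time"

	pb "example.com/bench/pb" // hypothetical generated package
)

// measureThroughput drains a server-streaming RPC and reports MB/s.
func measureThroughput(ctx context.Context, client pb.DataClient) error {
	stream, err := client.Stream(ctx, &pb.StreamRequest{})
	if err != nil {
		return err
	}
	start := time.Now()
	var total int64
	for {
		msg, err := stream.Recv()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
		total += int64(len(msg.GetPayload()))
	}
	fmt.Printf("%.1f MB/s\n", float64(total)/time.Since(start).Seconds()/(1<<20))
	return nil
}
```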
What did you expect to see?
Performance is limited by the network stack or CPU, and is on par with raw TCP tools such as iperf.
What did you see instead?
Throughput is capped at ~40MB/s while both CPU and network are under-utilized.
The gRPC transport quickly reaches the 4MB flow-control window limit and does not scale the window any further.
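For context, with a fixed flow-control window the sender can have at most one window of data in flight per round trip, so throughput is bounded by window / RTT. The sketch below works through that arithmetic; the ~100ms RTT is an assumption, chosen because it matches the observed 40MB/s ceiling:

```go
package main

import (
	"fmt"
	"time"
)

// maxThroughput returns the window-limited throughput in bytes per second:
// at most `window` bytes can be in flight during each round trip.
func maxThroughput(window int, rtt time.Duration) float64 {
	return float64(window) / rtt.Seconds()
}

func main() {
	const mb = 1 << 20
	rtt := 100 * time.Millisecond // assumed cross-zone RTT

	// A 4MB window at ~100ms RTT caps out around 40MB/s.
	fmt.Printf("4MB window:  %.0f MB/s\n", maxThroughput(4*mb, rtt)/mb)

	// Saturating a 100MB/s path at the same RTT needs at least a 10MB window.
	fmt.Printf("16MB window: %.0f MB/s\n", maxThroughput(16*mb, rtt)/mb)
}
```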
Manually specifying larger values for both the per-stream and per-connection window sizes (`grpc.WithInitialWindowSize` and `grpc.WithInitialConnWindowSize`) restores the expected throughput.
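For reference, the workaround looks roughly like the following; the 16MB value is just an example, and setting these options explicitly makes the transport use the fixed values instead of the BDP-estimated window:

```go
package main

import (
	"log"

	"google.golang.org/grpc"
)

// windowSize is an example value; pick something at or above the
// bandwidth-delay product of the path (here 16MB).
const windowSize = 16 * 1024 * 1024

func dial(target string) (*grpc.ClientConn, error) {
	// Raising both the stream and connection windows disables the
	// BDP-based autotuning and uses these fixed values instead.
	return grpc.Dial(target,
		grpc.WithInsecure(),
		grpc.WithInitialWindowSize(windowSize),
		grpc.WithInitialConnWindowSize(windowSize),
	)
}

func main() {
	conn, err := dial("localhost:50051")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```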
@euroelessar the 4MB window size limit is set based on standard TCP settings -- most TCP connections can't effectively utilize a larger window size. What is the discrepancy between our out-of-the-box 40MB/s and what you're seeing with iperf or when you set the window sizes manually?
Currently the difference is 40MB/s vs 100MB/s for cross-zone traffic (≈150% delta), and 300MB/s vs 450MB/s for traffic within a metro (≈50% delta). But within a metro we're likely hitting a CPU utilization bottleneck at 450MB/s.
Looks like 4MB is definitely too conservative a default - we are open to increasing it, or possibly to reading the TCP settings and mirroring the OS receive window size when setting our flow control.
@euroelessar could you do a quick experiment with a few larger window sizes and report the throughput you see?
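On the "reading the TCP settings" idea above, a minimal sketch of what that could look like on Linux, assuming the kernel's net.ipv4.tcp_rmem maximum is the signal to mirror (this is only an illustration, not how grpc-go does or would do it):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// tcpRmemMax reads the kernel's maximum TCP receive buffer size on Linux
// (the third field of /proc/sys/net/ipv4/tcp_rmem), one plausible signal
// for sizing the HTTP/2 flow-control window to match the OS receive window.
func tcpRmemMax() (int, error) {
	data, err := os.ReadFile("/proc/sys/net/ipv4/tcp_rmem")
	if err != nil {
		return 0, err
	}
	fields := strings.Fields(string(data))
	if len(fields) != 3 {
		return 0, fmt.Errorf("unexpected tcp_rmem format: %q", data)
	}
	return strconv.Atoi(fields[2])
}

func main() {
	rmemMax, err := tcpRmemMax()
	if err != nil {
		fmt.Println("could not read tcp_rmem:", err)
		return
	}
	fmt.Printf("kernel max TCP receive buffer: %d bytes\n", rmemMax)
}
```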