net: introduce SO_RCVBUFAUTO to let the rcv_buf tune automatically
Normally, users don't care about the logic behind the kernel when they
set the receive buffer via setsockopt(). However, once a new receive
buffer value is set, even one no smaller than the initial value
sysctl_tcp_rmem[1] used by the autotuning in tcp_rcv_space_adjust(),
the server's wscale will shrink and the bandwidth ends up worse than
intended.
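For reference, this is roughly what a tool such as "iperf -w" does under
the hood (a minimal user-space sketch, not part of this patch): setting
SO_RCVBUF makes the kernel clamp the requested value to
net.core.rmem_max, double it, and set the SOCK_RCVBUF_LOCK userlock,
after which tcp_rcv_space_adjust() no longer autotunes the buffer and
the initial window scale is derived from the locked size.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int val = 425984;       /* what "iperf -w 425984" requests */
            int out;
            socklen_t len = sizeof(out);

            /* Locks sk_rcvbuf: SOCK_RCVBUF_LOCK is set and TCP receive
             * buffer autotuning is skipped from this point on. */
            if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &val, sizeof(val)) < 0)
                    perror("setsockopt(SO_RCVBUF)");

            /* The kernel stores roughly twice the requested value. */
            getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &out, &len);
            printf("effective rcvbuf: %d\n", out);

            close(fd);
            return 0;
    }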
Introducing a new socket option that lets the receive buffer keep
growing automatically, no matter what value is set, solves the bad
bandwidth issue while leaving the existing SO_RCVBUF behaviour
untouched for applications that rely on it.
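A rough usage sketch (illustrative only, not taken from the patch; the
real constant and exact semantics come from the patched uapi headers,
and the value below is just a placeholder): the assumption here is that
the new option accepts the requested size like SO_RCVBUF but does not
set SOCK_RCVBUF_LOCK, so tcp_rcv_space_adjust() can keep growing the
buffer toward tcp_rmem[2].

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Placeholder only: use the constant from the patched headers;
     * 74 is NOT necessarily the value assigned by this patch. */
    #ifndef SO_RCVBUFAUTO
    #define SO_RCVBUFAUTO 74
    #endif

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int val = 425984;       /* same size as "iperf -w 425984" in case 2 */

            /* Assumed semantics: set the initial buffer like SO_RCVBUF,
             * but leave it unlocked so autotuning still applies. */
            if (setsockopt(fd, SOL_SOCKET, SO_RCVBUFAUTO, &val, sizeof(val)) < 0)
                    perror("setsockopt(SO_RCVBUFAUTO)");

            close(fd);
            return 0;
    }

On a kernel without this patch the call simply fails with ENOPROTOOPT,
which also gives applications an easy way to fall back to plain
SO_RCVBUF.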
Here are some numbers:
$ sysctl -a | grep rmem
net.core.rmem_default = 212992
net.core.rmem_max = 40880000
net.ipv4.tcp_rmem = 4096 425984 40880000
Case 1
on the server side
# iperf -s -p 5201
on the client side
# iperf -c [server ip] -p 5201
It turns out that the bandwidth is 9.34 Gbits/sec while the server-side
wscale is 10, which is good.
Case 2
on the server side
# iperf -s -p 5201 -w 425984
on the client side
# iperf -c [server ip] -p 5201
It turns out that the bandwidth is reduced to 2.73 Gbits/sec while the
wscale is 2, even though the receive buffer is not actually changed at
all at the very beginning (the requested size equals tcp_rmem[1]).
After this patch is applied, the bandwidth in case 2 recovers to
9.34 Gbits/sec as expected, at the cost of consuming more memory per
socket.
Signed-off-by: Jason Xing <xingwanli@kuaishou.com>
---
v2: suggested by Eric
- introduce new socket option instead of breaking the logic in SO_RCVBUF
- adjust the title and description of this patch
Link: https://lore.kernel.org/lkml/CANn89iL8vOUOH9bZaiA-cKcms+PotuKCxv7LpVx3RF0dDDSnmg@mail.gmail.com/