net: not possible to set TCP backlog parameter for listener #39000
Comments
This is the discussion that led to the creation of this issue; I'll try to rephrase it so it is easier to understand. To figure out the size of the TCP listening backlog, the Go library code reads the relevant system setting once. What @nemirst suggests is a way to explicitly specify the backlog size when creating a listening TCP socket, something akin to what #9661 added for tweaking options of already-created sockets.
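For context, here is a hedged sketch of how that system setting is consumed on Linux: the net package reads net.core.somaxconn once and uses the result as the backlog passed to listen(2). This is illustrative only; the actual standard-library code differs in detail.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// defaultBacklog roughly mirrors what the net package does on Linux: read
// net.core.somaxconn once and use it as the backlog argument to listen(2).
// Illustrative sketch, not the real implementation.
func defaultBacklog() int {
	data, err := os.ReadFile("/proc/sys/net/core/somaxconn")
	if err != nil {
		return syscall.SOMAXCONN // fall back to the compile-time constant
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil || n <= 0 {
		return syscall.SOMAXCONN
	}
	return n
}

func main() {
	fmt.Println("effective listen backlog:", defaultBacklog())
}
```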
In general we try to pass the maximum acceptable value as the backlog. From the docs (https://docs.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-listen), it looks like passing SOMAXCONN, which lets the service provider choose a reasonable maximum, would address this. Would that work for you?
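As a point of reference, here is a small sketch of the two Windows-specific backlog values mentioned in those docs. SOMAXCONN_HINT is a C macro (it expands to the negated value) and is not mirrored in the Go syscall package, so it is reproduced here purely for illustration.

```go
package main

import "fmt"

// On Windows, winsock2.h defines SOMAXCONN as 0x7fffffff, which tells the
// service provider to pick a reasonable maximum backlog on its own.
const somaxconn = 0x7fffffff

// somaxconnHint mimics the SOMAXCONN_HINT(b) macro, which expands to -b and,
// per the linked docs, asks for a backlog of roughly b (providers clamp the
// hinted value to a limited range).
func somaxconnHint(b int) int { return -b }

func main() {
	fmt.Println("let the provider choose:", somaxconn)
	fmt.Println("hint for ~500 pending connections:", somaxconnHint(500))
}
```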
Yes, that would fix my problem completely without any real disadvantages for my case. I'm just not sure whether it is appropriate for other cases, e.g. slightly higher memory usage or exposure to SYN flood attacks. For me a little higher memory usage is not important, and SYN flood attacks are not a concern because the server will not be public.
ISTM that it's not possible to use the ListenConfig.Control function to achieve this. Maybe we can add a backlog size knob to net.ListenConfig, then? If the knob is set (!= 0), we use that value. If it's not, we use the value we already use.
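To illustrate why Control can't reach the backlog, here is a minimal sketch, assuming Linux. Control runs after the socket is created but before bind(2) and listen(2), so it can adjust socket options, but the backlog is an argument to listen(2) itself, which the net package supplies internally.

```go
package main

import (
	"context"
	"net"
	"syscall"
)

func main() {
	lc := net.ListenConfig{
		// Control is invoked with the raw fd before bind/listen, so socket
		// options can be set here via setsockopt...
		Control: func(network, address string, c syscall.RawConn) error {
			var serr error
			err := c.Control(func(fd uintptr) {
				serr = syscall.SetsockoptInt(int(fd), syscall.SOL_SOCKET, syscall.SO_REUSEADDR, 1)
			})
			if err != nil {
				return err
			}
			return serr
		},
		// ...but the backlog value passed to listen(2) is chosen by the net
		// package itself and cannot be influenced from here.
	}
	ln, err := lc.Listen(context.Background(), "tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
}
```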
@acln0 I agree that we could add a knob to ListenConfig, but since we already try to pass the maximum acceptable value, such a knob seems close to useless: it could only be used to lower the backlog.
@ianlancetaylor You may be right. I don't know about useless, but it's at least a little strange. I implemented the knob I was talking about in package net, and then tried to write a test for it. The test tried to do something like this, on Linux:
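(The original snippet isn't preserved here; what follows is a rough reconstruction of such a test, with the hypothetical Backlog field commented out because it does not exist in net.ListenConfig. File and package names are invented for illustration.)

```go
// backlog_test.go (hypothetical file and package names)
package backlog

import (
	"context"
	"net"
	"testing"
)

// TestTinyBacklog reconstructs the experiment described above, assuming a
// hypothetical Backlog knob on net.ListenConfig.
func TestTinyBacklog(t *testing.T) {
	lc := net.ListenConfig{
		// Backlog: 1, // hypothetical knob under discussion; not a real field
	}
	ln, err := lc.Listen(context.Background(), "tcp", "127.0.0.1:0")
	if err != nil {
		t.Fatal(err)
	}
	defer ln.Close()

	// Dial twice without ever calling Accept. With an effective backlog of 1,
	// the naive expectation is that the second connection should not complete
	// immediately; on Linux, both dials succeeded anyway.
	for i := 0; i < 2; i++ {
		c, err := net.Dial("tcp", ln.Addr().String())
		if err != nil {
			t.Fatal(err)
		}
		defer c.Close()
	}
}
```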
Naively, I expected this to honor the backlog of 1. It didn't: both dials succeeded immediately. I don't know what this means, and I haven't had time to dig into the kernel sources in order to understand the behavior. Maybe Linux ignores backlog values that are this small. On the other hand, leaving Linux aside for the moment, I am reading https://docs.microsoft.com/en-us/archive/blogs/winsdk/winsocks-listen-backlog-offers-more-flexibility-in-windows-8, which seems reasonably official, and mentions backlog parameters as low as 2-4, so maybe a test like that one would pass on Windows, and would also capture the essence of this issue (and make the knob worth doing).
If you think it would help, I could mail a CL of my implementation and the test (with DO NOT SUBMIT, etc.)
Change https://golang.org/cl/233577 mentions this issue: |
I mailed it in the end. The relevant test is TestListenConfigBacklog, in listen_windows_test.go.
What version of Go are you using (go version)?

Does this issue reproduce with the latest release?
Yes

What operating system and processor architecture are you using (go env)?
go env Output

What did you do?
I'm cross-building from Linux to Windows (but that is not important here). When 500 clients connect simultaneously to my Go program (a TCP server), it fails to accept them all because the backlog queue fills up, even though I process connections as fast as possible. As a workaround on Linux, I fixed this by raising the net.core.somaxconn kernel parameter. But on Windows Server (at least on Microsoft Windows Server 2019 Datacenter) I didn't find a similar OS-level parameter. For Windows I had to rebuild Go from source after changing net.maxListenerBacklog to return a custom backlog value (for me SOMAXCONN_HINT(500) = -500 worked): https://github.com/golang/go/blob/master/src/net/sock_windows.go#L16. There are probably other workarounds, but this was done just to verify that setting this parameter fixes the problem (a rough sketch of that change appears at the end of this report).

What did you expect to see?
Some kind of way to set the backlog parameter before the listener starts accepting connections, so that clients aren't disconnected just because the backlog fills up in bursty scenarios.
What did you see instead?
Clients were disconnected when the backlog was full (its size is 200 on my system).
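For reference, here is a hedged sketch of the source change described under "What did you do?" above. The real function lives in src/net/sock_windows.go and its surroundings may differ; this only illustrates the idea of returning a fixed SOMAXCONN_HINT-style value.

```go
package net // patch sketch for src/net/sock_windows.go; not a standalone program

// maxListenerBacklog was changed in the rebuilt toolchain to return a fixed
// hint. SOMAXCONN_HINT(b) expands to -b on Windows, so -500 asks the service
// provider for a backlog of roughly 500 pending connections.
func maxListenerBacklog() int {
	return -500 // SOMAXCONN_HINT(500)
}
```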