net: not possible to set TCP backlog parameter for listener #39000
The discussion which led to the creation of this issue. I'll try to rephrase the issue a bit so it becomes more understandable. To figure out the size of the TCP listening backlog, the Go library code reads the relevant system setting once. What @nemirst suggests is to have a way to explicitly specify the backlog size when creating a listening TCP socket — something akin to what was implemented in #9661 for tweaking options of already-created sockets.
In general we try to pass the maximum acceptable value as the backlog. From the docs (https://docs.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-listen), it looks like using SOMAXCONN_HINT may be an option.
Yes, that would fix my problem completely without any real disadvantages for my case. I'm just not sure whether it is appropriate for other cases, like:
For me, slightly higher memory usage is not important, and SYN flood attacks are not a concern because the server will not be public.
ISTM that it's not possible to use the ListenConfig.Control function to achieve this. Maybe we can add a backlog size knob to net.ListenConfig, then? If the knob is set (!= 0), we use that value; if it's not, we use the value we already use.
@acln0 I agree that we could add a knob to net.ListenConfig.
@ianlancetaylor You may be right. I don't know about useless, but it's at least a little strange. I implemented the knob I was talking about in package net, and then tried to write a test for it. The test tried to do something like this, on Linux:
Naively, I expected this to honor the backlog of 1. It didn't: both dials succeeded immediately. I don't know what this means, and I haven't had time to dig into the kernel sources to understand the behavior. Maybe Linux ignores backlog values that are this small. On the other hand, leaving Linux aside for the moment, I am reading https://docs.microsoft.com/en-us/archive/blogs/winsdk/winsocks-listen-backlog-offers-more-flexibility-in-windows-8, which seems reasonably official and mentions backlog parameters as low as 2-4, so maybe a test like that one would pass on Windows and would also capture the essence of this issue (and make the knob worth doing).
If you think it would help, I could mail a CL of my implementation and the test (with DO NOT SUBMIT, etc.) |
Change https://golang.org/cl/233577 mentions this issue:
I mailed it in the end. The relevant test is TestListenConfigBacklog, in listen_windows_test.go.
This change never got merged, unfortunately.
Is this still the state of affairs? It seems ListenConfig.Control should be able to change the raw socket... but you can't? (on Unix)
Adding a new parameter to net.ListenConfig would need to go through the proposal process.
I was a bit more curious about understanding why ListenConfig.Control can't be used at all to tweak the backlog (though a more direct api proposal sounds good too) |
Thanks a lot for the reply and explanation; that gave me an idea for a workaround that at least on Linux seems to work.
In case it helps someone, pending the feature, I wrote this (hack):

```go
package backlog

import (
	"fmt"
	"net"
	"syscall"
)

// Set attempts to reset the TCP connection backlog queue to the given value.
// Works on macOS and Linux. On Linux it seems to allow n+1 connections to be
// queued (one in userland, one in the kernel, maybe? despite no Accept).
//
// To test:
//   - run the server:
//     go run sampleServer.go -b 3
//   - run the client with -n 5; only the first 3 (or 4) will connect:
//     go run sampleClient.go -n 5
//
// On Linux, `ss -ltn6` shows the backlog in the Send-Q column.
//
// PS: none of this is meant to work according to POSIX; it just happens to
// seem to do so. Pending https://github.com/golang/go/issues/39000.
func Set(l net.Listener, backlog int) error {
	tl, ok := l.(*net.TCPListener)
	if !ok {
		return fmt.Errorf("only TCP listeners supported, called with %#v", l)
	}
	file, err := tl.File()
	if err != nil {
		return err
	}
	fd := int(file.Fd())
	return syscall.Listen(fd, backlog)
}
```

You can find it here with the test programs: https://github.com/ldemailly/go-scratch/tree/main/backlog
(Emphasis mine.) So, if I understand correctly, calling syscall.Listen on the descriptor returned by File() does in fact change the backlog of the original listening socket?
The inode is the same, so it does change the socket.
That is much better than I thought, but still there's the problem of what happens after a call to File(). The same applies to messing with the socket parameters via the ListenConfig.Control function.
Not saying it's a perfect solution — exposing the backlog would be — yet meanwhile it can help someone. I would think listening sockets don't really get closed until the server exits, but I don't know what happens if you need to close one before.
What version of Go are you using (`go version`)?

Does this issue reproduce with the latest release?

Yes

What operating system and processor architecture are you using (`go env`)?

go env Output

What did you do?
I'm cross-building from Linux to Windows (but that is not important here). When 500 clients connect simultaneously to my Go program (a TCP server), it fails to accept them all because the backlog queue fills up, even though I process them as fast as possible. As a workaround on Linux, I fixed this by setting the net.core.somaxconn kernel parameter. But on Windows Server (at least on Microsoft Windows Server 2019 Datacenter) I didn't find a similar OS-level parameter or solution. For Windows I had to rebuild Go from source after changing net.maxListenerBacklog to return a custom backlog value (for me SOMAXCONN_HINT(500) = -500 worked): https://github.com/golang/go/blob/master/src/net/sock_windows.go#L16. There are probably other workarounds, but this was done just to verify that setting this parameter fixes the problem.

What did you expect to see?
Some kind of way to set the backlog parameter before the listener starts accepting connections, so that clients don't get disconnected just because the backlog is full in bursty scenarios.
What did you see instead?
Clients disconnected when the backlog was full (its size is 200 on my system).
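For reference, the net.core.somaxconn workaround mentioned in the report can be inspected (and, with privileges, raised) on Linux like this; the value 1024 is an arbitrary example, not from the original report:

```shell
# Current kernel-wide cap on listen(2) backlogs (Linux).
cat /proc/sys/net/core/somaxconn

# Raising it requires root; 1024 is an arbitrary example value.
# sudo sysctl -w net.core.somaxconn=1024
```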