runtime: automatically bump RLIMIT_NOFILE on Unix #46279
Comments
The limitation on … I note that on my Debian system the soft and hard limits are both …
Yeah, I saw that. We'd still need a conditional mechanism regardless. FWIW, on my various Debian (buster) & Ubuntu (focal LTS, hirsute) machines here, I see 1024 & 1048576.
GitHub code search (https://github.com/search?l=&p=2&q=unix.Select+language%3AGo&type=Code) shows it's mostly wireguard-go (cc @zx2c4 as FYI).
This proposal sounds like a good idea, with the caveat that we probably shouldn't do it in initialization for `-buildmode=shared`.
What happens today, even in programs that do nothing but file I/O (no select, etc.), is that if you open too many files you get errors. Auto-bumping would let those programs run longer. If Go did it at startup, it would be inherited by non-Go programs that we fork+exec. That is a potential incompatibility, but probably not a large one. Technically, I suppose we could undo it in the subprocess between fork and exec.
This proposal has been added to the active column of the proposals project |
To summarize the limiting use cases where we should not be raising the soft limit: …
One problem with restoring the limit in exec is we won't know if the limit was intentionally changed by the program in the interim. What about programs that explicitly raise the limit and then exec today? We would be dropping it back down. It seems like if we are going to raise the limit, we should just do that, not try to put it back.

I just ran into this problem with gofmt on my Mac, where the limit defaults to 256 (and gofmt was editing many files in parallel). I'd love for Go to raise the limit there too.

How much does it really matter if we raise the limit for a subprocess? People can always set the hard limit if they want Go not to try to bump the soft limit up.
It's pretty awful that the limit is breaking completely reasonable Go programs like `gofmt -w`. It's very hard to see any programs benefiting from this limit in practice anymore.
I think that seems quite reasonable. We can even document this in …
Not sure anyone is using syscall.Select for fds anyway.
Based on the discussion above, this proposal seems like a likely accept. |
Should the title be updated to mention Unix or something instead of Linux? |
The considerations may be different on different Unix systems. On Linux the details are somewhat specific to systemd. It may well be appropriate to do this on macOS also, but I don't know what the tradeoffs are there. Why does macOS have a default low limit?
From what I was able to find, that default goes back to the very first OS X release and probably even back to BSD. The constant is there. Of course, not doing that on macOS is not a deal-breaker but an annoyance.
The only issue I am aware of that can arise if RLIMIT_NOFILE is set to a very high value is that some binaries (which may be executed from a Go program and thus inherit the limit) want to do something like this (pseudocode):

```
for fd := 3; fd < getrlimit(RLIMIT_NOFILE); fd++ {
    close(fd) // or set the CLOEXEC flag
}
```

For a specific example, … Most probably this should not be an issue, since Docker also does a similar thing (moby/moby#38814), and since everyone seems to be using containers now, let's hope that issues like this are fixed (better yet, maybe some programs have even started using …).

Also, this is surely not a showstopper to accept the proposal; just something to keep in mind.
No change in consensus, so accepted. |
Change https://go.dev/cl/392415 mentions this issue: |
Change https://go.dev/cl/393016 mentions this issue: |
The test for this change is failing on at least three builders; looks like we may need to plumb in …
This reverts CL 392415. Reason for revert: new test is failing on at least darwin-amd64-10_14, darwin-amd64-10_15, and openbsd-arm64-jsing. Updates #46279. Change-Id: I2890b72f8ee74f31000d65f7d47b5bb0ed5d6007 Reviewed-on: https://go-review.googlesource.com/c/go/+/393016 Trust: Bryan Mills <bcmills@google.com> Run-TryBot: Bryan Mills <bcmills@google.com> Reviewed-by: Russ Cox <rsc@golang.org>
I'm confused about needing … On my Mac with macOS 12.2: …
I've always used `ulimit -n unlimited` without trouble on Macs. I wonder if the struct definitions are wrong.
It looks like the macOS … So I suppose one option might be to try the …
Some more info at #40564.
I put in a call to sysctl kern.maxfilesperproc. Hopefully that exists on the older macOS. And I skipped the OpenBSD failure entirely. (It is not a first-class port.) |
Change https://go.dev/cl/393354 mentions this issue: |
Change https://go.dev/cl/394094 mentions this issue: |
For #46279 For #51713 Change-Id: I444f309999bf5576449a46a9808b23cf6537e7dd Reviewed-on: https://go-review.googlesource.com/c/go/+/394094 Trust: Ian Lance Taylor <iant@golang.org> Run-TryBot: Ian Lance Taylor <iant@golang.org> Auto-Submit: Ian Lance Taylor <iant@golang.org> TryBot-Result: Gopher Robot <gobot@golang.org> Reviewed-by: Joel Sing <joel@sing.id.au>
Refs: golang/go#46279, http://0pointer.net/blog/file-descriptor-limits.html. The I/O runtimes used in this project, Tokio and Mio, won't use select(2), so it is safe to bump the RLIMIT_NOFILE soft limit to the hard limit automatically.
I just read http://0pointer.net/blog/file-descriptor-limits.html which in a nutshell says:

- select can't handle file descriptors above 1023, so the kernel keeps the default soft limit at 1024 for compatibility with select users.
- On modern systems the hard limit is much higher (512K with systemd), and programs that don't use select can freely bump their soft limit up to the hard limit.
- If a program bumps the soft limit and then uses select anyway, it won't work.

I realize that since Go doesn't use select, the Go runtime could automatically do this fd soft limit bumping on Linux.

We do have a Select wrapper at https://pkg.go.dev/golang.org/x/sys/unix#Select, though, so perhaps we could do the same thing we did for #42347 in 18510ae (https://go-review.googlesource.com/c/go/+/299671) and do the bumping conditionally based on whether the unix.Select func is in the binary. Or cgo too, I suppose.

I suspect many users are unaware of this 512K hard limit that's free to bump up to. I certainly was unaware. (I normally have to go in and manually tweak my systemd limits instead, usually in response to problems once I hit the limit...) I think fixing it automatically would help more users than it'd hurt. (I actually can't think how it'd hurt anybody?)

I don't think we need it as a backpressure mechanism. As the blog post mentions, memory limits are already that mechanism.
/cc @ianlancetaylor @aclements @rsc @randall77