Does this issue reproduce with the latest release?
What operating system and processor architecture are you using (go env)?
More of an issue on MacOS, but exists anywhere I guess.
What did you do?
Tried to limit memory usage. Specifically, I wanted allocation to fail (yes, I know this panics and can't be recovered) rather than have the runtime request more memory from the OS.
What did you expect to see?
I don't honestly know. Memory usage is poorly-defined and inconsistent at the best of times.
What did you see instead?
So, modern MacOS just doesn't let you do anything much like ulimit anymore for memory usage. And modern Linux pretty much always does overcommit, and then if too much memory gets used, the OOM killer picks something to kill, more or less arbitrarily. Which may or may not be the thing you wanted.
I'm looking at some code which has an unbounded memory usage problem, which I want to fix, and first I wanted to write a test case that would confirm that it was failing. And discovered that I can't sanely do this; I can't actually make the test fail due to excessive memory usage, because the amount of memory I can allocate (way over 60GB) takes long enough to populate that tests time out, and anyway a test that takes over ten minutes to run isn't a great fit for ever getting anything done.
And we have GOMEMLIMIT, but that's advisory. And what I sort of want is, roughly, a GOMEMLIMITHARD, where when runtime is about to request more memory from the OS, it checks whether doing so would cause the runtime's total memory-requested-from-OS to exceed the limit, and if it would, to fail as though a request from the OS was denied.
Q: But that will just kill your program!
A: Yes, exactly. It will kill this program, rather than causing the kernel to pick some arbitrary program to kill. That is, in fact, highly preferable.
Q: But you can just use the OS's limits!
A: Apparently not on MacOS in the last few years (?!?!). Still seems to work on Linux via ulimit -v, although it's a bit approximate, because memory "usage" is not well-defined. (For instance, mmapped files sometimes but not always count as some kinds of memory.)
On Linux you can use cgroups or other features to limit how much memory a process can use. This covers a whole bunch of things that the Go runtime does not track (cgo allocations, as pointed out by rittneje, and some network buffers if I'm not mistaken).
I don't see much value in adding an inferior solution to the runtime.
Quick google searches suggest you can do the same thing on MacOS, Windows, and FreeBSD (cgroups themselves being Linux-specific).
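As a sketch of the cgroup route (Linux only, needs root or a delegated cgroup; the group name, binary name, and 512 MiB figure are all illustrative):

```shell
# Option 1: one-shot via systemd, which creates a transient cgroup for you.
systemd-run --scope -p MemoryMax=512M -p MemorySwapMax=0 ./my-test-binary

# Option 2: by hand against cgroup v2 (paths assume it's mounted at /sys/fs/cgroup).
sudo mkdir /sys/fs/cgroup/memtest
echo $((512 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/memtest/memory.max
echo $$ | sudo tee /sys/fs/cgroup/memtest/cgroup.procs  # move this shell into the group
./my-test-binary  # OOM-killed by the kernel if it exceeds 512 MiB
```

Unlike a runtime-side limit, this is enforced by the kernel against all memory the process maps, including cgo and other allocations the Go runtime never sees.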
Hm, I don't think this can work quite like GOMEMLIMIT because that's one way to get really bad death spirals. If the runtime knows that it can't exceed the limit, the only reasonable option is to try arbitrarily hard to stay under it. (Trying to decide what is "too hard" is also quite arbitrary and there's no clear reasonable policy decision when the alternative is to crash. With a soft limit the runtime is a bit more flexible and it's easier to come up with a reasonable policy: avoid a death spiral by going past the limit.) That's at least one reason why GOMEMLIMIT is a soft limit.
The only way I could see this working is if it exists completely outside of GOMEMLIMIT and GOGC and the garbage collector completely ignores the value. The runtime would just die if its statistics indicated it exceeded the limit. Thing is, in a concurrent/parallel runtime the statistics are never going to be perfectly accurate without adding serialization to hot paths. (And it sounds like you don't need perfect accuracy either.)
At which point it seems like one could implement something similar themselves by having a background goroutine that watches the same statistics the GOMEMLIMIT machinery watches (via the runtime/metrics package, for instance), and exits the program when a threshold is exceeded. I'm not sure I follow why the runtime should be responsible for failing at the exact point of requesting more memory from the OS. If the program exits within 10-100ms of the limit being exceeded, that doesn't seem too bad for a test? Would this work for your use-case?
I also wouldn't recommend relying on ulimit -v in the long term. It counts PROT_NONE mappings that don't correspond to any physical memory and are considered cheap (see https://go.dev/doc/gc-guide#A_note_about_virtual_memory). The Go runtime also makes a few large-ish up-front read/write mappings that also count against virtual memory footprint, but are usually mostly unbacked by physical memory. Notably:
The Go runtime makes a few very large PROT_NONE mappings for the page allocator index.
The Go runtime reserves up to 512 MiB of address space as PROT_NONE on 32-bit platforms.
The Go runtime maps O(MiB) of memory as read/write for a performance-critical data structure, and relies on demand paging to keep the real memory footprint low.
The race detector (TSAN runtime) makes a large up-front read/write mapping that also relies on demand paging to keep the real memory footprint low.