proposal: runtime: add a mechanism for specifying a minimum target heap size #23044

Open
cespare opened this Issue Dec 8, 2017 · 17 comments

@cespare
Contributor

cespare commented Dec 8, 2017

I propose that we add a GC knob (either an environment variable or a function in runtime/debug) which allows users to set the minimum target heap size for the garbage collector.

For now I'll call this setting GOGCMIN pending a better name.

Right now, the one GC tunable available to users is GOGC. From the runtime documentation:

The GOGC variable sets the initial garbage collection target percentage. A collection is triggered when the ratio of freshly allocated data to live data remaining after the previous collection reaches this percentage. The default is GOGC=100.

The idea is that when the target heap size is calculated based on live data and GOGC, if that target is less than GOGCMIN, then GOGCMIN is used instead.
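To make the intended semantics concrete, here is a minimal sketch of how such a floor could interact with the existing GOGC-based goal; the function and variable names are purely illustrative, not actual runtime internals:

```go
package main

import "fmt"

// Illustrative only (not the pacer's real code): compute the next GC
// target from the live heap and GOGC, then apply the proposed minimum.
func nextGCTarget(liveHeap, gogcMin, gogc uint64) uint64 {
	target := liveHeap + liveHeap*gogc/100 // current GOGC-based goal
	if target < gogcMin {
		target = gogcMin // never trigger a collection below the minimum
	}
	return target
}

func main() {
	// A 40 MB live heap with GOGC=100 would normally target ~80 MB;
	// with a hypothetical GOGCMIN of 1 GB, the minimum wins.
	fmt.Println(nextGCTarget(40<<20, 1<<30, 100)) // 1073741824
}
```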

The problem

It has often been noted that programs which make a lot of allocations while maintaining a small live heap end up doing excessive garbage collections.

At my company, we've noticed this a number of times. It's typically a problem with data processing applications which might read and write messages from queues at a large rate, yet keep very little data live over the long term.

We've had to address CPU usage due to such excessive garbage collections for at least three separate applications. Here are two real situations we observed:

  • 29 GB of system memory; 40 MB heap; 500 MB/sec allocation; 30 collections/sec
  • 480 GB of system memory; 100 MB heap; 700 MB/sec allocation; 12 collections/sec

In all cases, we would prefer that the application use a lot more memory in order to do fewer collections.

Existing workarounds

GOGC

The knob that's available for controlling this situation is GOGC, described above. When we've come across these issues in the past, we've set high GOGC values and that largely fixes the problem, at least in the short term.

Unfortunately, this is a fragile fix. We don't actually care about the GOGC ratio; we want to target a particular heap size. So if we have a 40 MB heap, we might back into a GOGC value like 1000 or even 10000 in order to target a 400 MB or 4 GB heap size, respectively. With large GOGC values, the application is extremely sensitive to small increases in the live heap size: if our 40 MB heap increases to 100 MB (not a large jump), then our 4 GB target becomes 10 GB.
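To spell out the arithmetic behind those numbers (a sketch, assuming the heap goal is roughly live × (1 + GOGC/100)):

```go
package main

import "fmt"

// goalMB is the approximate per-cycle heap goal implied by GOGC for a
// given live heap size, assuming goal ≈ live * (1 + GOGC/100).
func goalMB(liveMB, gogc float64) float64 { return liveMB * (1 + gogc/100) }

func main() {
	fmt.Println(goalMB(40, 10000))  // ~4040 MB: GOGC=10000 turns a 40 MB live heap into a ~4 GB goal
	fmt.Println(goalMB(100, 10000)) // ~10100 MB: a modest jump to 100 MB live pushes the goal to ~10 GB
}
```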

In fact, we recently had crashes with one application where we set GOGC=1200 many months ago when its typical heap size was a few hundred MB. The live data size increased to several GB and then the service started OOMing.

SetGCPercent

One way to address the shortcomings of GOGC is to dynamically adjust its value. This is possible by using runtime/debug.SetGCPercent.

We tried a solution that involved a long-running goroutine in every application watching memory use (via runtime.ReadMemStats) and adjusting SetGCPercent.
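In outline, such a loop might look like the sketch below (the target size, polling interval, and the use of HeapAlloc as a stand-in for the live heap are illustrative, not our exact code):

```go
package main

import (
	"runtime"
	"runtime/debug"
	"time"
)

// adjustGCPercent periodically recomputes a GOGC value aimed at a fixed
// heap target, using HeapAlloc as an approximation of the live heap.
func adjustGCPercent(targetBytes uint64) {
	var ms runtime.MemStats
	for range time.Tick(10 * time.Second) {
		runtime.ReadMemStats(&ms) // not free: briefly stops the world
		live := ms.HeapAlloc
		pct := 100
		if live > 0 && live < targetBytes {
			pct = int(100 * (targetBytes - live) / live)
		}
		debug.SetGCPercent(pct)
	}
}

func main() {
	go adjustGCPercent(1 << 30) // aim the GC goal at roughly 1 GB
	select {}                   // ... application work ...
}
```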

There are at least two problems with this approach:

  • We can't react to arbitrarily fast increases in heap size. If the heap is very small and we set SetGCPercent very high, we are prone to OOMing if the heap grows suddenly before we adjust SetGCPercent again. (And we don't want to make the adjustments too frequently because it's not cheap.)
  • As far as I can tell, the live heap size from the end of the previous collection is not exposed by runtime.MemStats. (I think it's only available by parsing gctrace output.) This is the number we really need in order to accurately pick a GOGC value.

Heap ballast

We eventually settled on an awful workaround: we have a long-running goroutine manage a set of dummy allocations (ballast). When the heap is small, the ballast is large; if the live heap reaches the target size, the ballast shrinks to zero.

We can't pick the ballast as accurately as we would like because, again, we need the live heap size from the previous GC cycle. But by using the total non-ballast heap size as a conservative proxy, the solution works well enough. In particular, by keeping GOGC at a normal level (usually 100), we aren't subject to the heap size spike issue.
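A stripped-down sketch of the ballast idea (the target size and polling interval are illustrative; this is not our exact code):

```go
package main

import (
	"runtime"
	"time"
)

// manageBallast keeps a dummy allocation alive so the GOGC-based goal stays
// near targetBytes even when the real live heap is tiny. Because the ballast
// is never written to, most of its pages typically remain untouched.
func manageBallast(targetBytes uint64) {
	var ballast []byte
	var ms runtime.MemStats
	for range time.Tick(10 * time.Second) {
		runtime.ReadMemStats(&ms)
		appHeap := ms.HeapAlloc
		if b := uint64(len(ballast)); b < appHeap {
			appHeap -= b // subtract our own padding from the estimate
		}
		if appHeap >= targetBytes {
			ballast = nil // the real heap is big enough; drop the padding
		} else {
			ballast = make([]byte, targetBytes-appHeap)
		}
	}
}

func main() {
	go manageBallast(512 << 20) // pad the effective heap toward ~512 MB
	select {}                   // ... application work ...
}
```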

Obviously this isn't a great solution for the long term since it wastes memory that could otherwise be used for something else (like disk cache).

Related discussions

A related idea is to have a mechanism for limiting the max heap size (see #16843 and other linked discussions). However, I believe that a min size is a much simpler problem to solve since it doesn't require application coordination (backpressure).

Also related to the old issue #9067.


I'm happy to give this the full proposal document treatment if that's useful. It seems like a simple idea that doesn't necessarily need it.

Based on my limited understanding of the garbage collector, this would be easy to implement and wouldn't add much complexity to the GC.

/cc @aclements @RLH

@cespare cespare added the Proposal label Dec 8, 2017

@cespare cespare added this to the Go1.11 milestone Dec 8, 2017

@RLH

Contributor

RLH commented Dec 20, 2017

The root cause is that the GC does not know how much RAM is available, so it is understandably conservative. If the GC knew how much RAM it had to play with, the problem could be handled transparently by the GC. As the related discussion notes, #16843 proposes a mechanism for specifying a maximum heap size. Once the GC knows the maximum heap available, it can use GC frequency to help determine when to start the next GC cycle. One idea is to increase the heap size until the GC runs at a reasonable frequency or the maximum heap size is reached. While this proposal is much simpler, it does not address many of the issues #16843 addresses.

@cespare

Contributor

cespare commented Dec 20, 2017

@RLH I agree that some solutions to #16843 might address this, but frankly the discussion around that problem has not looked promising from the outside. It seems like the theoretical and technical barriers to a good max-heap API are large.

Meanwhile, the GC frequency problems described in this issue are -- to us, at least -- more pressing than the max-heap issue, admit a much simpler solution, and have only bad workarounds today.

@RLH

Contributor

RLH commented Dec 21, 2017

We are working our way through the issues related to #16843. We have a prototype implementation at https://go-review.googlesource.com/c/go/+/46751, which the community is welcome to comment on. Such public prototypes are intended to address the implementation barriers, while discussion of the theoretical barriers is part of the process. Adding knobs to the GC is a very heavyweight process.

That said, your problems are real. Perhaps a discussion concerning what the GC should use internally as the default minimum heap size, currently 4 MB, is warranted. For example, would a default heap size derived from typical cloud virtual machine instances be a reasonable start? In December 2017, a standard 4-CPU instance (think GOMAXPROCS) comes with 16 GB; a "high CPU" 4-CPU instance comes with 3.6 GB. #16854 could be used to limit the heap size to something below the default so small-heap functionality wouldn't be lost.

@cespare

Contributor

cespare commented Dec 21, 2017

> We are working our way through the issues related to #16843. We have a prototype implementation at https://go-review.googlesource.com/c/go/+/46751, which the community is welcome to comment on. Such public prototypes are intended to address the implementation barriers, while discussion of the theoretical barriers is part of the process. Adding knobs to the GC is a very heavyweight process.

Thanks. It seems like CL 46751 doesn't address my issue, though. (At least I don't see how from the documentation; I haven't understood all of the code.)

Furthermore, I don't necessarily have any kind of pushback mechanism to react to the notifications and I would not like to replace the current fast-OOM behavior on excessively large heaps (good) with any kind of thrashing/death spiral (bad).

> That said, your problems are real. Perhaps a discussion concerning what the GC should use internally as the default minimum heap size, currently 4 MB, is warranted. For example, would a default heap size derived from typical cloud virtual machine instances be a reasonable start? In December 2017, a standard 4-CPU instance (think GOMAXPROCS) comes with 16 GB; a "high CPU" 4-CPU instance comes with 3.6 GB. #16854 could be used to limit the heap size to something below the default so small-heap functionality wouldn't be lost.

If the minimum were raised to, say, [40 MB × GOMAXPROCS] then that would be helpful to us.

I expect the people running Go on Raspberry Pis and the like would have something to say about that, though.

@elvarb

elvarb commented Feb 19, 2018

I have a similar problem but a different use case. The program processes lots of data from disk and can run for a very long time. The memory usage of the Go program itself is not much of an issue, but the server it runs on also does other work. When the server is running other tasks, for example backups, it can run out of memory. That is not a problem for the other applications, but the Go application crashes when it wants a little more memory and is denied.

My preferred solution would be a setting on how to handle this situation.

  1. Default: crash. This helps catch memory leaks: if a Go service leaks memory, it crashes and the service manager restarts it.
  2. Halt: keeps long-running processes alive. When memory cannot be allocated, halt and try again after a configurable time.
@artyom

Contributor

artyom commented Feb 19, 2018

@elvarb I believe you can already achieve the behavior you described on Linux by using cgroups and their OOM-related knobs: https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt

@elvarb

elvarb commented Feb 19, 2018

@artyom I'm running into this issue on Windows, does something like cgroups exist there?

@cespare

Contributor

cespare commented Feb 19, 2018

@elvarb what you're describing sounds quite different from the specific setting I'm requesting in this issue. I suggest you open a new issue to discuss it.

@cespare

Contributor

cespare commented Feb 23, 2018

I ran into this again today with another service that was doing 15 collections/sec. This one is a fairly simple webserver that receives 10-15k requests per second and turns them into protobuf messages that are sent along the network to other systems for processing.

@aclements

Member

aclements commented Mar 20, 2018

I think there are two basically orthogonal things going on here.

GC amortization failure

Assuming I understand the problem, I think what's actually going on here is a failure of the runtime to amortize GC costs. Currently, the GC goal (the payment) is in terms of heap size, but the actual GC cost is proportional not just to the heap size, but also to the size of the globals, the sizes of the stacks, and possibly a small fixed overhead. At large heaps, the relative contributions of the other factors tend toward zero, but at small heaps they can be a significant portion of the cost. (For the record, we've been here before: #19839 :)

@cespare, for the real situations where you've observed this, I'd love to know how much data those programs have in globals (add the sizes of the .data and .bss segments) and in stacks (roughly runtime.MemStats.StackInuse).

Here's a trivial example program to demonstrate my thinking: https://play.golang.org/p/k69Zo0C7M1F. Here's the measured GC wall clock time on my laptop with GOMAXPROCS=1 (not necessary, but keeps things predictable) with two different sizes of globals as the heap grows:

[Figure: measured GC wall-clock time vs. heap size, for two different sizes of globals]

Perhaps more to the point, we can look at the proportional GC cost:

[Figure: proportional GC cost vs. heap size]

The 4MB minimum is meant to truncate away the really, really bad proportional cost at the left, but even with just 10MB of globals that's clearly not enough.
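For readers who can't follow the link, a rough, self-contained sketch in the same spirit (this is not the program behind the plots; the global size and loop count are made up):

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// ~16 MB of pointer slots in .bss: the GC has to scan all of these every
// cycle even though the live heap stays tiny.
var globals [2 << 20]*byte

var sink []byte // keeps the loop's allocations on the heap

func main() {
	globals[0] = new(byte) // touch the array so it isn't dead-code-eliminated
	start := time.Now()
	for i := 0; i < 1000000; i++ {
		sink = make([]byte, 4096) // steady garbage, almost nothing stays live
	}
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	fmt.Printf("%d GCs, GC CPU fraction %.1f%%, elapsed %v\n",
		ms.NumGC, 100*ms.GCCPUFraction, time.Since(start))
}
```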

Given this, I propose that we tweak the definition of GOGC to be proportional to the total cost of GC, which includes at least heap, globals, and stacks. For applications with larger heaps, the difference probably won't be noticeable. But I believe this may solve the "small heap problem" without the need for extra knobs or potentially-dangerous rate limiting.

For example, consider a program that has 100MB of live heap and 100MB of globals. In the current scheme, the footprint grows to 300MB total, but GC has to scan 200MB for 100MB of growth, making GC twice as expensive as an equivalent program with 0MB of globals. In the proposed scheme, the footprint would grow to 400MB total, but GC would scan 200MB for 200MB of growth, so the GC cost is identical to the program with 0MB of globals.
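A sketch of the two goal formulas as described above (a paraphrase for illustration, not the pacer's actual code):

```go
package main

import "fmt"

const MB = 1 << 20

// currentGoal: growth is proportional to the live heap only.
func currentGoal(liveHeap, gogc uint64) uint64 {
	return liveHeap + liveHeap*gogc/100
}

// proposedGoal: growth is proportional to everything GC has to scan
// (live heap + globals + stacks), so the scan work is amortized evenly.
func proposedGoal(liveHeap, globals, stacks, gogc uint64) uint64 {
	return liveHeap + (liveHeap+globals+stacks)*gogc/100
}

func main() {
	// 100 MB live heap, 100 MB globals, GOGC=100 (the example above):
	fmt.Println(currentGoal(100*MB, 100) / MB)             // 200 MB heap goal -> 300 MB total footprint
	fmt.Println(proposedGoal(100*MB, 100*MB, 0, 100) / MB) // 300 MB heap goal -> 400 MB total footprint
}
```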

SetMaxHeap

Fixing GC amortization may or may not fix your problem (I'll have a better sense if you can measure the globals and stacks). However, based on some of the things you said, I think SetMaxHeap might be a reasonable solution, or I might be picking quotes too carefully. :)

> We don't actually care about the GOGC ratio; we want to target a particular heap size.

This sounds like exactly what SetMaxHeap does. If you know how big you want the heap, you can set GOGC to ~infinity (in TeX tradition, say infinity=10000) and put the heap entirely under the control of SetMaxHeap.

> We tried a solution that involved a long-running goroutine in every application watching memory use (via runtime.ReadMemStats) and adjusting SetGCPercent.

I think the SetMaxHeap channel would let you do this sort of thing much more effectively. For example, if you're okay with the heap growing, you can use the channel to observe that the heap is under pressure and rather than reducing your application's heap usage (the normal use of the channel) you could raise the heap limit. This is cheap to do (doesn't trigger a GC). And it's okay if there's some lag because it's still a soft limit: in the worst case, the GC will expend some extra cycles trying to keep you under an unnecessary limit, but it's not going to OOM your process.
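Purely as a sketch of that pattern, and assuming the CL 46751 prototype exposes something along the lines of a debug.SetMaxHeap(limit, notify) call (the real signature and semantics may differ; check the CL):

```go
// Pseudocode against the SetMaxHeap prototype; SetMaxHeap is not in the
// standard library, and its actual signature may differ from what is
// assumed here.
package main

import "runtime/debug"

func main() {
	limit := 1 << 30 // start with a soft limit of ~1 GB
	notify := make(chan struct{}, 1)
	debug.SetGCPercent(10000)       // "infinite" GOGC: heap size is governed by the limit
	debug.SetMaxHeap(limit, notify) // hypothetical prototype call

	go func() {
		for range notify {
			// The GC says we're under pressure. Instead of shedding load,
			// raise the soft limit; some lag here is safe because the
			// limit is soft and won't OOM the process.
			limit *= 2
			debug.SetMaxHeap(limit, notify)
		}
	}()
	// ... application work ...
}
```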

@cespare

Contributor

cespare commented Mar 21, 2018

@aclements thanks very much for your detailed consideration.

I pulled some metrics for one of my example programs. Let me know if you need more. (These numbers are after removing my "ballast" workaround.)

Program K:

entity                                               size
.data (as reported by size -A)                       69 KB
.bss (as reported by size -A)                        135 KB
Heap size (as reported by runtime.MemStats.Alloc)    70–120 MB
StackInuse                                           31 MB
# of goroutines                                      ~7300
Allocation rate                                      650 MB/sec
GC rate                                              12 collections/sec

Everything you're saying about SetMaxHeap sounds interesting. Should I be trying out CL 46751 and giving feedback for these use cases?

@cespare

Contributor

cespare commented Mar 21, 2018

From the above, I'm assuming that the .data and .bss sizes are insignificant and that we can point the finger at the stack:heap proportion.

I looked at 3 other programs where we've noticed a high GC rate (not all of these were using enough CPU to warrant addressing) and I can confirm that they all have similar .data/.bss sizes.

@rsc

Contributor

rsc commented Apr 23, 2018

Leaving for @aclements. Marking proposal-hold so we don't see it at proposal review but it's really just on hold for Austin to accept when the runtime team is ready.

@cespare

Contributor

cespare commented Aug 14, 2018

This has been noted elsewhere (don't have the issues handy) but I also want to point out that this "small heap" problem is pretty common when running benchmarks (i.e., with go test -bench ...). This is a scenario where heaps are typically pretty small and so if your benchmark includes allocations you can sometimes end up measuring the cost of doing a huge number of GCs -- much more than the real application sees.

(How to write and interpret benchmarks in the face of GC is a more general -- and difficult -- problem, of course, but benchmarks are the other place where I end up fiddling with GOGC and it would be much easier to explain GOMINHEAP or SetMaxHeap than GOGC=5000 in this context.)

@LK4D4

Contributor

LK4D4 commented Sep 18, 2018

In an effort to mitigate #18155 we reduced our heap size by ~90% (mostly using sync.Pool), and we are now facing this issue: we spend more time inside GC cycles than outside them.
We will probably go the ballast route, but having SetMaxHeap would be nice.
