proposal: runtime: GC pacer redesign #44167
GC Pacer Redesign
Author: Michael Knyszek (with lots of input from Austin Clements, David Chase, and Jeremy Faller)
Go's tracing garbage collector runs concurrently with the application, and thus requires an algorithm to determine when to start a new cycle. In the runtime, this algorithm is referred to as the pacer. Until now, the garbage collector has framed pacing as an optimization problem, using a proportional controller to achieve both a desired stopping point (that is, the cycle completes just as the heap reaches a certain size) and a desired CPU utilization. While this approach has served Go well for a long time, the design has accrued many corner cases from past fixes, as well as a backlog of unresolved issues.
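To make the proportional-controller framing concrete, here is a minimal sketch of the idea: after each GC cycle, the controller nudges the point at which the next cycle starts, in proportion to how far the last cycle missed its goal. All names, constants, and structure here are invented for illustration; the real runtime pacer is considerably more involved.

```go
package main

import "fmt"

// pacer is a toy proportional controller in the spirit of the pre-redesign
// GC pacer. triggerRatio and gain are made-up names for this sketch.
type pacer struct {
	triggerRatio float64 // fraction of heap growth at which to start the next cycle
	gain         float64 // proportional gain applied to the observed error
}

// update adjusts triggerRatio after a GC cycle ends. goalGrowth is the
// desired heap growth for a cycle (e.g. GOGC=100 corresponds to 1.0);
// actualGrowth is how much the heap actually grew by the time the cycle
// finished. Overshooting the goal pulls the trigger earlier; undershooting
// pushes it later.
func (p *pacer) update(goalGrowth, actualGrowth float64) {
	err := goalGrowth - actualGrowth // negative when the cycle overshot
	p.triggerRatio += p.gain * err
	if p.triggerRatio < 0 {
		p.triggerRatio = 0
	}
}

func main() {
	p := &pacer{triggerRatio: 0.7, gain: 0.5}
	// Simulate cycles that keep overshooting the goal: the controller
	// responds by starting each subsequent cycle earlier.
	for i := 0; i < 3; i++ {
		p.update(1.0, 1.2)
		fmt.Printf("cycle %d: triggerRatio=%.2f\n", i, p.triggerRatio)
	}
}
```

A controller like this converges toward the goal on steady workloads, but it is exactly the kind of feedback loop that misbehaves in corner cases (tiny heaps, abrupt shifts in allocation behavior), which is part of what motivates the redesign.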
I propose redesigning the garbage collector's pacer from the ground up to capture the things it does well and eliminate the problems that have been discovered.
More specifically, I propose changes that will resolve long-standing issues with small heap sizes, allowing the Go garbage collector to scale down and act more predictably in general.
By the way: this design feels solid to me, but has not gone through any rounds of feedback yet. In the interest of transparency, I'm hoping to get feedback and work on this here on GitHub going forward.
So, given that, I would not be surprised if there are errors in the document. Please take a look when you have a chance!
Do I understand correctly that forcegcperiod is required because the current pacer does not consider non-heap sources of GC work? Is it necessary to call the GC periodically in applications with an effectively zero heap allocation rate in order to collect stacks, etc.? If I understood your proposal correctly, it should be possible to remove these periodic GC calls, so that applications that don't create new goroutines and don't allocate anything on the heap never trigger garbage collections, which is a good benefit by itself.
Anyway, I have to look into this again, so don't quote me. My memory is hazy. :) I'll dig into the reasons why next week (I don't see them documented anywhere).
For golang/go#44167. Change-Id: I468aa78edb8588b4e48008ad44cecc08544a8f48 Reviewed-on: https://go-review.googlesource.com/c/proposal/+/290489 Reviewed-by: Michael Pratt <email@example.com> Reviewed-by: Jeremy Faller <firstname.lastname@example.org>
A couple of the graphs were wrong (from the wrong scenario, that is) because I copied them in manually. Fatal mistake. Regenerate the graphs following the usual pipeline. Because there's a degree of jitter and randomness in these graphs, they end up slightly different, but they're all mostly the same. Regenerating also adds a new line to each graph for the live heap size. I think this is nice for readability, so I'll let that get updated too. For golang/go#44167. Change-Id: I097f812ba07ca7fd740d8460e2830de6492b3945 Reviewed-on: https://go-review.googlesource.com/c/proposal/+/293790 Reviewed-by: Michael Pratt <email@example.com>
I realized I neglected to talk about initial conditions, even though all the simulations clearly set *something*. For golang/go#44167. Change-Id: Ia1727d5c068847e9192bf87bc1b6a5f0bb832303 Reviewed-on: https://go-review.googlesource.com/c/proposal/+/295509 Reviewed-by: Michael Pratt <firstname.lastname@example.org>