From https://go.dev/doc/gc-guide#:~:text=Let%27s%20work%20through%20an%20example
It would have been nice to have certain things spelled out, since they weren't all that obvious initially (at least to me). Maybe something like this?
Let's work through an example.
Assume some application allocates 10 MiB per cpu-second, that the GC can scan at a rate
of 100 MiB per cpu-second (a made-up figure), and that fixed GC costs are zero.
The steady state makes no assumptions about the size of the live heap,
but for simplicity, let's say this application's live heap is always 10 MiB.
(Note: a constant live heap does not mean that all newly allocated memory is dead.
It means that, after the GC runs, some mix of old and new heap memory is live.)
If each GC cycle happens exactly every 1 cpu-second, then our example application,
in the steady state, will have a 20 MiB total heap size on each GC cycle
(10 MiB live plus 10 MiB newly allocated).
And with every GC cycle, the GC will need 0.1 cpu-seconds
-to do its work,
+to find and mark the 10 MiB of live memory in the 20 MiB heap,
resulting in a 10% overhead.
Now let's say each GC cycle happens less often, once every 2 cpu-seconds.
Then, our example application, in the steady state, will have a 30 MiB total
heap size on each GC cycle (10 MiB live plus 20 MiB newly allocated).
But with every GC cycle, the GC will still only need 0.1 cpu-seconds
-to do its work.
+to find and mark the 10 MiB of live memory, even though the heap is now 30 MiB.
+Since the GC only needs to walk through live memory,
+this extra 10 MiB of "dead" memory has no impact on its marking time.
So our GC overhead just went down, from 10% to 5%, at the
cost of 50% more memory being used (a 30 MiB heap instead of 20 MiB).
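The arithmetic above can be sketched as a tiny Go program. The constants and the steady-state formula are just restatements of the made-up numbers in the example, not anything measured from the runtime:

```go
package main

import "fmt"

func main() {
	const (
		allocRate = 10.0  // MiB allocated per cpu-second (assumed)
		scanRate  = 100.0 // MiB the GC can scan per cpu-second (assumed)
		liveHeap  = 10.0  // MiB still live after each GC cycle (assumed)
	)
	for _, period := range []float64{1, 2} { // cpu-seconds between GC cycles
		totalHeap := liveHeap + allocRate*period // live + newly allocated
		gcCost := liveHeap / scanRate            // only live memory is scanned
		overhead := gcCost / period * 100
		fmt.Printf("cycle every %.0f cpu-s: heap %.0f MiB, GC cost %.1f cpu-s, overhead %.0f%%\n",
			period, totalHeap, gcCost, overhead)
	}
}
// cycle every 1 cpu-s: heap 20 MiB, GC cost 0.1 cpu-s, overhead 10%
// cycle every 2 cpu-s: heap 30 MiB, GC cost 0.1 cpu-s, overhead 5%
```

Note that `gcCost` depends only on the live heap, which is why doubling the cycle period halves the overhead while the heap grows.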