
runtime: background scavenger is overzealous with small heaps #32012

Closed
mknyszek opened this issue May 13, 2019 · 3 comments
Labels: FrozenDueToAge, NeedsFix (The path to resolution is known, but the work has not been done.)
Milestone: Go1.14
Comments

@mknyszek (Contributor)

Today the background scavenger will treat newly-mapped heap memory as unscavenged, which means that if you have an application with a small heap (such as 24 MiB) on Linux, where heap arenas are 64 MiB, then it will scavenge 64 - 24 = 40 MiB, which is a fair bit of work to be doing.

However, those 40 MiB were never touched in the first place, so they haven't been faulted in and don't count against your RSS anyway. The scavenging work is totally unnecessary.

Instead, we should probably treat newly-mapped memory as scavenged so that this issue is avoided. It also more accurately represents reality, and by tracking this in HeapReleased, the runtime will have a better measure of the application's heap RSS.
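
For context, the relevant accounting is visible through runtime.MemStats. A minimal, illustrative sketch (not from the issue itself) of how one might observe how much idle heap memory has actually been returned to the OS:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Allocate a modest heap, then drop it so some spans become idle.
	data := make([][]byte, 0, 1024)
	for i := 0; i < 1024; i++ {
		data = append(data, make([]byte, 16<<10)) // ~16 MiB total
	}
	data = nil
	runtime.GC()

	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)

	// HeapIdle is memory in spans with no objects in them; HeapReleased is
	// the portion of that which has been returned to the OS (scavenged).
	fmt.Printf("HeapSys:      %d MiB\n", ms.HeapSys>>20)
	fmt.Printf("HeapIdle:     %d MiB\n", ms.HeapIdle>>20)
	fmt.Printf("HeapReleased: %d MiB\n", ms.HeapReleased>>20)
	// Idle-but-unscavenged memory is roughly HeapIdle - HeapReleased.
	fmt.Printf("unscavenged:  %d MiB\n", (ms.HeapIdle-ms.HeapReleased)>>20)
}
```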

The one caveat here is that on Windows we currently map new heap memory as MEM_COMMIT|MEM_RESERVE, which means that Windows counts all of it against your RSS. But there's a clear solution here too: if we instead reserve new memory with MEM_RESERVE and only commit it with MEM_COMMIT upon allocation, we can get this right.
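
As a rough illustration of that reserve-then-commit pattern (a sketch using golang.org/x/sys/windows, not the runtime's actual platform memory code), the idea is to reserve address space up front and commit pages only when an allocation needs them:

```go
//go:build windows

package main

import (
	"fmt"
	"unsafe"

	"golang.org/x/sys/windows"
)

func main() {
	const size = 64 << 20 // reserve a 64 MiB region, roughly arena-sized

	// MEM_RESERVE only claims address space; it does not count against
	// commit charge or RSS.
	base, err := windows.VirtualAlloc(0, size, windows.MEM_RESERVE, windows.PAGE_NOACCESS)
	if err != nil {
		panic(err)
	}

	// Later, commit only the pages an allocation actually needs.
	const used = 24 << 20
	addr, err := windows.VirtualAlloc(base, used, windows.MEM_COMMIT, windows.PAGE_READWRITE)
	if err != nil {
		panic(err)
	}

	// Touch the committed memory; only these pages become backed.
	p := unsafe.Slice((*byte)(unsafe.Pointer(addr)), used)
	p[0] = 1

	fmt.Printf("reserved %d MiB at %#x, committed %d MiB\n", size>>20, base, used>>20)
	_ = windows.VirtualFree(base, 0, windows.MEM_RELEASE)
}
```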

There isn't currently a noticeable performance impact from this, so I don't think it's terribly high priority, but it could affect the "warm-up" time of applications on systems without huge pages and with larger arena sizes (e.g. Linux without THP, AIX, etc.). Aside from that, it would also be nice to make the accounting a little bit better here for small heaps.

CC @aclements

@mknyszek mknyszek self-assigned this May 13, 2019
@mknyszek mknyszek added the NeedsFix label May 13, 2019
@mknyszek mknyszek modified the milestones: Go1.13, Go1.14 May 13, 2019
@mknyszek (Contributor, Author)

Another thought occurred to me: this may obviate the need for heap-growth scavenging, since that case would now be covered by scavenging when allocating from scavenged memory.

@gopherbot (Contributor)

Change https://golang.org/cl/177097 mentions this issue: runtime: mark newly-mapped memory as scavenged

@gopherbot (Contributor)

Change https://golang.org/cl/177237 mentions this issue: runtime: make TestPhysicalMemoryUtilization more robust

gopherbot pushed a commit that referenced this issue May 20, 2019
Currently, this test allocates many objects and relies on scavenging happening unconditionally on heap growth. However, with the new pacing system for scavenging, that is no longer true, and the test is flaky.

So, this change overhauls TestPhysicalMemoryUtilization to check the
same aspect of the runtime, but in a much more robust way.

Firstly, it sets up a much more constrained scenario: only 5 objects are allocated in total, with a worst-case (i.e. test-failing) memory footprint of about 16 MiB. The test is now aware that scavenging will
only happen if the heap growth causes us to push way past our scavenge
goal, which is based on the heap goal. So, it makes the holes in the
test much bigger and the actual retained allocations much smaller to
keep the heap goal at the heap's minimum size. It does this twice to
create exactly two unscavenged holes. Because the ratio between the size
of the "saved" objects and the "condemned" object is so small, two holes
are sufficient to create a consistent test.

Then, the test allocates one enormous object (the size of the 4 other
objects allocated, combined) with the intent that heap-growth scavenging
should kick in and scavenge the holes. The heap goal will rise after
this object is allocated, so it's very important we do all the
scavenging in a single allocation that exceeds the heap goal because
otherwise the rising heap goal could foil our test.

Finally, we check memory use relative to HeapAlloc as before. Since the
runtime should scavenge the entirety of the remaining holes,
theoretically there should be no more free and unscavenged memory.
However, due to other allocations that may happen during the test, we may still see unscavenged memory, so we need some threshold. We keep the current 10% threshold, which, while arbitrary, is very conservative and should easily account for any other allocations the test makes.

Before, we also had to ensure the allocations we were making looked
large relative to the size of a heap arena since newly-mapped memory was
considered unscavenged, and so that could significantly skew the test.
However, thanks to the fix for #32012 we were able to reduce memory use
to 16 MiB in the worst case.

Fixes #32010.

Change-Id: Ia38130481e292f581da7fa3289c98c99dc5394ed
Reviewed-on: https://go-review.googlesource.com/c/go/+/177237
Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>
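
For illustration, a minimal sketch of the kind of check the commit message describes (not the actual TestPhysicalMemoryUtilization code, which lives in the runtime's test program): compare the heap memory still retained from the OS against HeapAlloc, allowing roughly 10% slack.

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// ... allocate the "saved" and "condemned" objects, free the condemned
	// ones, then allocate one large object so heap growth triggers
	// scavenging, as described in the commit message above ...

	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)

	// Heap memory still backed by the OS: obtained from the OS minus what
	// has been returned (scavenged).
	heapBacked := ms.HeapSys - ms.HeapReleased

	// Overuse relative to the live heap; allow ~10% slack for incidental
	// allocations made during the test.
	overuse := (float64(heapBacked) - float64(ms.HeapAlloc)) / float64(ms.HeapAlloc)
	if overuse > 0.10 {
		fmt.Printf("unscavenged memory too high: %.1f%% overuse\n", overuse*100)
	} else {
		fmt.Println("physical memory utilization looks OK")
	}
}
```
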
@golang golang locked and limited conversation to collaborators May 15, 2020