runtime: scavenger is too eager on Darwin #36507
Currently on Darwin the scavenger is too eager and is causing performance regressions. The issue mainly stems from the fact that in Go 1.14 the scavenger is paced empirically according to the measured costs of scavenging, most of which comes from the `madvise` syscall itself.
However, the problem on Darwin is that we don't just do `madvise` to scavenge: because we scavenge with `MADV_FREE_REUSABLE`, correct memory accounting requires a matching `madvise` with `MADV_FREE_REUSE` in `sysUsed` when the pages are allocated again.
So since we don't account for the cost of that `MADV_FREE_REUSE` call, which lands on the allocation path rather than in the scavenger, the pacing underestimates the true cost of scavenging and the scavenger runs too eagerly.
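For reference, the call sequence on Darwin can be sketched with a mock syscall layer. The `madvise` helper and the call log here are stand-ins so the sketch runs anywhere; the real runtime issues actual `madvise` syscalls in `sysUnused` and `sysUsed`:

```go
package main

import "fmt"

// calls records the mock madvise invocations for illustration.
var calls []string

// madvise is a hypothetical stand-in for the real syscall.
func madvise(advice string, pages int) {
	calls = append(calls, fmt.Sprintf("%s(%d pages)", advice, pages))
}

// sysUnused models the scavenger returning pages to the OS.
// This cost is measured and feeds the scavenger's pacing.
func sysUnused(pages int) { madvise("MADV_FREE_REUSABLE", pages) }

// sysUsed models the allocator reusing those pages. Darwin requires this
// second madvise for kernel memory accounting, and its cost lands on the
// allocation path, where the Go 1.14 scavenger pacing never sees it.
func sysUsed(pages int) { madvise("MADV_FREE_REUSE", pages) }

func main() {
	sysUnused(16) // scavenger: measured
	sysUsed(16)   // allocator: unmeasured extra cost on Darwin
	fmt.Println(calls)
}
```

The point of the sketch is just that every scavenged page implies a second, unmeasured `madvise` when it is reused.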
The actual size of the regression can be quite large, up to 5%, as seen in #36218, so we should fix this before 1.14 goes out.
The fix here is relatively simple: we just need to account for this extra cost somehow. We could measure it directly in the runtime, but that would slow down allocation unnecessarily, and even then it's unclear how we should attribute that cost to the scavenger (maybe as a debt it needs to pay down?). Trying to account for this cost on non-Darwin platforms is also tricky because there the cost isn't a syscall at all but page faults taken on first access to scavenged memory, which are already spread across the program's execution.
Instead, I think it's a good idea to do something along the lines of what we did last release: get some empirical measurements and use them to get an order-of-magnitude approximation. In this particular case, I think we should compute an empirical ratio "r" of the cost of using a scavenged page versus an unscavenged one, and scale the scavenger's pacing by that ratio on Darwin.
Digging deeper, I think I know better why this costs so much: `sysUsed` is called over whole allocations even when only a few pages in them were actually scavenged.
I'm not sure what to do here. The patch I uploaded helps the problem a little bit, but completely solving it would require figuring out exactly which pages are scavenged and only calling `sysUsed` (and therefore `madvise`) on those pages.
Because the allocator doesn't propagate up which pages are scavenged (though it does know this information precisely!), currently we just do the heavy-handed thing. But we could instead have the allocator, which actually clears the bits, do the `sysUsed` call itself, restricted to exactly the scavenged ranges.
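The per-range idea above amounts to walking the scavenged bitmap and emitting contiguous runs. A minimal sketch, with an invented bitmap layout (the runtime actually tracks scavenged bits per chunk in the page allocator, not as a `[]bool`):

```go
package main

import "fmt"

// scavengedRuns returns [start, end) page ranges whose bits are set in the
// scavenged bitmap. The allocator could then call sysUsed only on these
// runs instead of over the whole allocation.
func scavengedRuns(scavenged []bool) [][2]int {
	var out [][2]int
	start := -1
	for i, s := range scavenged {
		switch {
		case s && start < 0:
			start = i // run begins
		case !s && start >= 0:
			out = append(out, [2]int{start, i}) // run ends
			start = -1
		}
	}
	if start >= 0 {
		out = append(out, [2]int{start, len(scavenged)}) // trailing run
	}
	return out
}

func main() {
	bits := []bool{false, true, true, false, true}
	// Only pages 1-2 and page 4 would need MADV_FREE_REUSE.
	fmt.Println(scavengedRuns(bits))
}
```

The trade-off is that a fragmented bitmap yields many small runs, and hence many syscalls, where today we pay for one big one.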
Unfortunately, lowering the `sysUsed` call into the allocator would mean making a syscall while the heap lock is held, which is bad for scalability.
OK, I'm going to walk back my second thought. It turns out that, yes, #36507 (comment) describes a real problem. It's slightly worse now, but only by a little bit. This is basically going back to the Go 1.11 behavior, where if any memory in an allocation was scavenged we would just treat the whole thing as scavenged.