Currently the scavenger's goal is set to `1.1 * next_gc`, and this goal is frequently compared against `heap_sys - heap_released`. Unfortunately this is somewhat of an apples-to-oranges comparison because `next_gc` is in terms of bytes of objects whereas the latter is in terms of bytes of pages. While pages contain objects, there could be some degree of fragmentation.
If this fragmentation is greater than 10% (i.e. exceeds the 1.1 factor above), it's possible that the scavenger will always think it has work to do, and it could end up over-scavenging significantly, leading to every new page-level allocation causing a page fault, only for the page to be scavenged again immediately. In fact, we've seen exactly that with some internal code.
The clearest fix to me is to change the pacing to account for this fragmentation. Based on @aclements' advice, it probably makes more sense to just make `heap_inuse` at the end of the last GC (i.e. at mark termination) the basis for our goal, which is more of an apples-to-apples comparison. However, this loses the property of tracking `next_gc`, which we want because it tells us how much the heap is expected to grow or shrink. To recover that, we could take the ratio between the current heap goal and the previous heap goal and multiply it by `heap_inuse` to obtain the scavenge goal. Of course, this assumes that fragmentation stays relatively steady so that `heap_inuse` tracks the heap goal, but a steady-state degree of fragmentation is a generally reasonable assumption to make.
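A minimal sketch of the proposed pacing computation (hypothetical function and parameter names, not the actual runtime implementation):

```go
package main

import "fmt"

// scavengeGoal computes a scavenge goal from heap_inuse as recorded
// at the end of the last GC (mark termination), scaled by the ratio
// of the current heap goal to the previous one, which projects how
// much the heap is expected to grow or shrink.
func scavengeGoal(heapInuse, prevHeapGoal, currHeapGoal uint64) uint64 {
	// goal = heap_inuse * (current goal / previous goal), computed in
	// floating point to avoid intermediate integer overflow.
	ratio := float64(currHeapGoal) / float64(prevHeapGoal)
	return uint64(float64(heapInuse) * ratio)
}

func main() {
	// Example: heap_inuse was 120 MiB at mark termination and the
	// heap goal grew from 100 MiB to 110 MiB, so the scavenge goal
	// scales up by the same 1.1 factor to 132 MiB.
	const MiB = 1 << 20
	fmt.Println(scavengeGoal(120*MiB, 100*MiB, 110*MiB) / MiB)
}
```

Note that unlike `1.1 * next_gc`, this goal is in terms of bytes of pages on both sides of the comparison, so steady-state fragmentation no longer causes the scavenger to see phantom work.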