Currently we compute a conservative assist ratio that assumes the entire heap is reachable in order to prevent heap overshoot caused by underestimating. Unfortunately, this means we usually over-assist (by a factor of ~2 in steady state). This causes the GC cycle to finish below the heap goal and with high CPU utilization. This is okay to do for a few cycles, but the unfortunate part is that it's a stable state for the trigger controller, so we keep doing this (it scales the heap growth by the overutilization factor and finds that it would have been right on target with 25% utilization, so it doesn't change the trigger).
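A rough way to see the ~2x over-assist is to compare the ratio computed under the "entire heap is reachable" assumption with the one you'd get from the actual live heap. The sketch below is purely illustrative; `heapTrigger`, `heapGoal`, `scannableHeap`, and `actuallyLive` are made-up names and numbers, not runtime identifiers.

```go
// Hedged sketch (not the runtime's code): a conservative assist ratio that
// assumes the whole heap must be scanned over-assists when only part of
// the heap is actually live.
package main

import "fmt"

func main() {
	const (
		heapTrigger   = 90.0  // MB, heap size when the GC cycle starts
		heapGoal      = 100.0 // MB, target heap size at the end of the cycle
		scannableHeap = 90.0  // MB, conservative: assume ALL of it is reachable
		actuallyLive  = 45.0  // MB, what really turns out to be reachable
	)

	// Assist ratio: scan work the mutator must do per byte allocated,
	// chosen so scanning finishes before the heap reaches the goal.
	conservativeRatio := scannableHeap / (heapGoal - heapTrigger)
	idealRatio := actuallyLive / (heapGoal - heapTrigger)

	fmt.Printf("conservative assist ratio: %.1f scan-bytes per alloc-byte\n", conservativeRatio)
	fmt.Printf("ideal assist ratio:        %.1f scan-bytes per alloc-byte\n", idealRatio)
	// With half the heap live, the conservative ratio is ~2x the ideal one,
	// so marking finishes well below the heap goal with extra assist CPU.
}
```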
One solution we talked about early in the pacer development, but never put together a concrete design for, was to adjust the assist ratio as we approached the heap goal. This means the mutator may assist more toward the end of the cycle, but it will let us target the heap goal more precisely (sort of a "mid-flight correction"). We need to consider how to do this without allowing overshoot and without putting too much assist strain on the end of the cycle (which increases mutator latency). We also need to consider whether this affects how we compute the trigger error, which currently assumes linearity.
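To make the "mid-flight correction" idea concrete, one hedged sketch (assumed names, not a proposed runtime API) is to periodically recompute the assist ratio from the *remaining* scan work and the *remaining* headroom to the heap goal, instead of fixing it once per cycle:

```go
// Hedged sketch of a mid-flight correction to the assist ratio.
// reviseAssistRatio and its parameters are illustrative only.
package main

import "fmt"

// reviseAssistRatio returns the scan-bytes of assist work required per byte
// allocated so that the remaining scan work finishes before heapLive
// reaches heapGoal.
func reviseAssistRatio(scanWorkRemaining, heapGoal, heapLive float64) float64 {
	headroom := heapGoal - heapLive
	if headroom <= 0 {
		// No headroom left: assist as hard as possible to avoid overshoot.
		return 1e9
	}
	return scanWorkRemaining / headroom
}

func main() {
	heapGoal := 100.0 // MB
	// As the cycle progresses, live heap grows and remaining scan work shrinks.
	for _, p := range []struct{ live, remaining float64 }{
		{90, 45}, {94, 30}, {98, 10}, {99.5, 5},
	} {
		r := reviseAssistRatio(p.remaining, heapGoal, p.live)
		fmt.Printf("heapLive=%5.1fMB remainingScan=%4.1fMB -> assist ratio %.1f\n",
			p.live, p.remaining, r)
	}
	// The ratio rises sharply near the goal, which is exactly the latency
	// concern above: assists pile up at the end of the cycle.
}
```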
Controlling the 25% GC CPU target and the maximum idle CPU that the GC soaks up would give us more control over the overshoot. In our case the amortized GC CPU is about 1%, but the GC runs for 16 seconds (out of 120) at roughly 200% CPU: it takes all the idle CPU, also consumes some from our main goroutines, and in rare cases causes a shortage in the service. Is this an option?
Our issue was solved by Go 1.19: the GC no longer takes all the idle CPU, which keeps our threads working.
/cc @RLH