Currently we compute a conservative assist ratio that assumes the entire heap is reachable in order to prevent heap overshoot caused by underestimating. Unfortunately, this means we usually over-assist (by a factor of ~2 in steady state). This causes the GC cycle to finish below the heap goal and with high CPU utilization. This is okay to do for a few cycles, but the unfortunate part is that it's a stable state for the trigger controller, so we keep doing this (it scales the heap growth by the overutilization factor and finds that it would have been right on target with 25% utilization, so it doesn't change the trigger).
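To make the over-assist concrete, here is a hedged sketch (the names `assistRatio`, `expectedScanWork`, etc. are hypothetical, not the runtime's actual fields) comparing the conservative ratio, which assumes every heap byte is reachable and must be scanned, against the ratio you'd get from the steady-state case where only about half the heap is reachable:

```go
package main

import "fmt"

// assistRatio returns the scan work a mutator must perform per byte it
// allocates, given the scan work expected this cycle and the allocation
// headroom remaining before the heap goal. (Hypothetical helper for
// illustration only.)
func assistRatio(expectedScanWork, heapGoal, heapLive int64) float64 {
	return float64(expectedScanWork) / float64(heapGoal-heapLive)
}

func main() {
	const (
		heapLive = int64(100 << 20) // 100 MiB live at the trigger
		heapGoal = int64(200 << 20) // 200 MiB goal
	)
	// Conservative: assume the entire live heap must be scanned.
	conservative := assistRatio(heapLive, heapGoal, heapLive)
	// Steady state: only ~half the heap is actually reachable.
	actual := assistRatio(heapLive/2, heapGoal, heapLive)
	fmt.Printf("conservative ratio: %.2f\n", conservative) // 1.00
	fmt.Printf("actual ratio:       %.2f\n", actual)       // 0.50
	fmt.Printf("over-assist factor: %.1fx\n", conservative/actual)
}
```

With these (made-up but representative) numbers the conservative ratio is 2x the required one, which is the ~2x steady-state over-assist described above.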
One solution we talked about early in the pacer development, but never put together a concrete design for, was to adjust the assist ratio as we approached the heap goal. This means the mutator may assist more toward the end of the cycle, but it will let us target the heap goal more precisely (sort of a "mid-flight correction"). We need to consider how to do this without allowing overshoot and without putting too much assist strain on the end of the cycle (which increases mutator latency). We also need to consider whether this affects how we compute the trigger error, which currently assumes linearity.
We would also like control over the maximum idle CPU that the GC soaks up.
In our case the GC's amortized CPU share is 1%, but it runs for 16 s (out of a 120 s window) at 200% CPU: it takes all of the idle CPU and consumes some from our main goroutine, which in rare cases causes a shortage in the service.
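The 1% amortized figure and the 200% burst can be reconciled by separating the in-cycle share from the windowed average; the sketch below assumes a 32-CPU machine (that CPU count is an assumption, not stated above):

```go
package main

import "fmt"

func main() {
	const (
		gcCPUs  = 2.0   // 200% CPU while the GC cycle runs
		gcSecs  = 16.0  // cycle wall-clock duration
		window  = 120.0 // observation window in seconds
		numCPUs = 32.0  // hypothetical machine size (assumption)
	)
	// Share of the machine consumed while the cycle is running.
	burst := gcCPUs / numCPUs
	// Share averaged over the whole window.
	amortized := gcCPUs * gcSecs / (numCPUs * window)
	fmt.Printf("burst GC share:     %.2f%%\n", burst*100)     // 6.25%
	fmt.Printf("amortized GC share: %.2f%%\n", amortized*100) // 0.83%
}
```

So a GC that is negligible on average can still be a large, bursty consumer for 16 s at a stretch, which is what starves the idle CPU and spills onto the main goroutine.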
Controlling both the 25% target and the maximum idle CPU consumption would give us more control over the overshoot.
Is this an option?