sched/fair: Introduce SIS_CURRENT to wake up short task on current CPU
[Problem Statement]

For a workload that does frequent context switches, the throughput
scales well until the number of instances reaches a peak point. After
that peak point, the throughput drops significantly if the number of
instances continues to increase.

The will-it-scale context_switch1 test case exposes the issue. The
test platform has 2 x 56C/112T, 224 CPUs in total. will-it-scale
launches 1, 8, 16 ... instances respectively. Each instance is composed
of 2 tasks, and each pair of tasks does ping-pong scheduling via
pipe_read() and pipe_write(). No task is bound to any CPU. It is found
that, once the number of instances is higher than 56, the throughput
drops accordingly:

              ^
    throughput|
              |                 X
              |               X   X X
              |             X         X X
              |           X               X
              |          X                  X
              |        X
              |      X
              |    X
              |  X
              +-----------------.------------------->
                                56
                                     number of instances

[Symptom analysis]

One cause of the performance degradation is the high system idle
percentage (around 20% ~ 30%): the CPUs waste a lot of time in idle
and do nothing. As a comparison, if CPU affinity is set for these
workloads so that they stop migrating among CPUs, the idle percentage
drops to nearly 0% and the throughput increases a lot. This indicates
that there is room for optimization.

The cause of the high idle time is that there is no strict
synchronization between select_task_rq() and the setting of the
ttwu_pending flag across several CPUs. This might be by design, because
the scheduler prefers parallel wakeups.

Suppose there are nr_cpus pairs of ping-pong scheduling tasks. For
example, p0' and p0 are a ping-pong scheduling pair, as are p1' <=> p1
and p2' <=> p2. None of these tasks are bound to any CPU. The problem
can be summarized as: more than one waker is stacked on one CPU, which
slows down the wakeup of their wakees:

           CPU0                    CPU1                     CPU2

    p0'                     p1' => idle              p2'

    try_to_wake_up(p0)                               try_to_wake_up(p2);
    CPU1 = select_task_rq(p0);                       CPU1 = select_task_rq(p2);
    ttwu_queue(p0, CPU1);                            ttwu_queue(p2, CPU1);
      __ttwu_queue_wakelist(p0, CPU1);
        WRITE_ONCE(CPU1->ttwu_pending, 1);
        __smp_call_single_queue(CPU1, p0);  => ttwu_list->p0
                            quitting cpuidle_idle_call()

                                                 __ttwu_queue_wakelist(p2, CPU1);
                                                   WRITE_ONCE(CPU1->ttwu_pending, 1);
                            ttwu_list->p2->p0  <=  __smp_call_single_queue(CPU1, p2);

    p0' => idle
                            sched_ttwu_pending()
                              enqueue_task(p2 and p0)

                            idle => p2

                            ...

                            p2 time slice expires

                            ...
                                     !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
                             <===    !!! p2 delays the wake up of p0' !!!
                                     !!! causes long idle on CPU0     !!!
                            p2 => p0 !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

                            p0 wakes up p0'

    idle => p0'

Since there are many waker/wakee pairs in the system, the chain
reaction causes many CPUs to be victims. These idle CPUs wait for their
wakers to be scheduled. Tiancheng has mentioned this issue in [1].

Besides the high idle percentage, waking up the tasks on different CPUs
brings Core-to-Core cache overhead, which hurts performance.
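For reference, the ping-pong pattern described in the problem statement
can be sketched as a minimal user-space reproducer. This is an
illustration only, not the actual will-it-scale source: each pair
bounces one byte across two pipes, so every iteration forces one wakeup
and one sleep per task.

    /*
     * Illustrative ping-pong pair (not the will-it-scale source):
     * two tasks bounce one byte across two pipes. Every write()
     * triggers a try_to_wake_up() of the peer, and every read()
     * puts the caller to sleep, matching the diagram above. Launch
     * N copies to approximate context_switch1 with N instances.
     */
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
            int ping[2], pong[2];
            char c = 0;

            if (pipe(ping) || pipe(pong))
                    exit(1);

            if (fork() == 0) {
                    for (;;) {
                            read(ping[0], &c, 1);   /* sleep until woken */
                            write(pong[1], &c, 1);  /* wake the parent */
                    }
            }

            for (;;) {
                    write(ping[1], &c, 1);          /* wake the child */
                    read(pong[0], &c, 1);           /* sleep until woken back */
            }
    }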
[Proposal]

Wake up a short-duration wakee on the current CPU if the following
conditions are met (a sketch of the combined check follows this list):

1. waker A's rq->nr_running <= 1
2. waker A is a short duration task (the waker will fall asleep soon)
3. wakee B is a short duration task (B's impact on A is minor)
4. A->wakee_flips is 0 and A->last_wakee = B
5. B->wakee_flips is 0 and B->last_wakee = A

The reasoning is that if the waker is a short duration task, it will
relinquish the CPU soon, so the wakee has a chance to be scheduled. On
the other hand, if the wakee is a short duration task, putting it on a
non-idle CPU brings minimal impact to the running task.

The benefits of waking a short task on its current CPU are:

1. It reduces the race condition which causes the high idle percentage.
2. It increases cache sharing between the waker and the wakee.

The threshold that defines a short duration task is
sysctl_sched_migration_cost. As suggested by Peter, this value is also
used in task_hot() to prevent migrations.

This wakeup strategy can be regarded as a dynamic version of the
WF_CURRENT_CPU flag [2] proposed by Andrei Vagin, except that this
change treats the current CPU as the last resort when the previous CPU
is not idle, and avoids stacking tasks on the current CPU as much as
possible.
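For illustration, the five conditions above could be combined into a
predicate along the lines of the sketch below. This is not the verbatim
patch code: the helper name wake_on_current() is hypothetical, and the
sketch assumes a per-task average run duration (written here as
p->se.dur_avg) is maintained elsewhere in the series; the remaining
identifiers (sched_feat(), wakee_flips, last_wakee,
sysctl_sched_migration_cost) are existing scheduler fields and APIs.

    /*
     * Rough sketch, meant to live in kernel/sched/fair.c. Assumes
     * p->se.dur_avg tracks the task's average run duration (added
     * elsewhere in this series); not the verbatim patch code.
     */
    static int is_short_task(struct task_struct *p)
    {
            return sched_feat(SIS_CURRENT) &&
                   p->se.dur_avg <= sysctl_sched_migration_cost;
    }

    /*
     * Hypothetical helper: may the wakeup of @p by the running task
     * fall back to the current CPU once no idle CPU was found?
     */
    static bool wake_on_current(struct rq *rq, struct task_struct *p)
    {
            if (rq->nr_running > 1)                 /* condition 1 */
                    return false;
            if (!is_short_task(current))            /* condition 2 */
                    return false;
            if (!is_short_task(p))                  /* condition 3 */
                    return false;
            /* conditions 4 and 5: a stable 1:1 waker/wakee pair */
            if (current->wakee_flips || current->last_wakee != p)
                    return false;
            if (p->wakee_flips || p->last_wakee != current)
                    return false;

            return true;
    }

Gating on wakee_flips being zero restricts the fallback to stable 1:1
pairs, which is what keeps tasks from stacking on the current CPU under
fan-out wakeups.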
[Benchmark results]

The baseline is v6.3-rc7 tip:sched/core, on top of commit f31dcb1
("sched/clock: Fix local_clock() before sched_clock_init()").

The test platform, Intel Sapphire Rapids, has 2 x 56C/112T, 224 CPUs in
total. C-states deeper than C1E are disabled. Turbo is disabled. The
CPU frequency governor is performance.

Overall there is a universal improvement for netperf/tbench/
will-it-scale under different loads, and no significant impact on
hackbench/schbench.

will-it-scale
=============
case                    load            baseline        compare%
context_switch1         224 groups      1.00            +552.84%

netperf
=======
case                    load            baseline(std%)  compare%( std%)
TCP_RR                  56-threads       1.00 (  1.96)   +15.23 (  4.67)
TCP_RR                  112-threads      1.00 (  1.84)   +88.83 (  4.37)
TCP_RR                  168-threads      1.00 (  0.41)  +475.45 (  4.45)
TCP_RR                  224-threads      1.00 (  0.62)  +806.85 (  3.67)
TCP_RR                  280-threads      1.00 ( 65.80)  +162.66 ( 10.26)
TCP_RR                  336-threads      1.00 ( 17.30)    -0.19 ( 19.07)
TCP_RR                  392-threads      1.00 ( 26.88)    +3.38 ( 28.91)
TCP_RR                  448-threads      1.00 ( 36.43)    -0.26 ( 33.72)
UDP_RR                  56-threads       1.00 (  7.91)    +3.77 ( 17.48)
UDP_RR                  112-threads      1.00 (  2.72)   -15.02 ( 10.78)
UDP_RR                  168-threads      1.00 (  8.86)  +131.77 ( 13.30)
UDP_RR                  224-threads      1.00 (  9.54)  +178.73 ( 16.75)
UDP_RR                  280-threads      1.00 ( 15.40)  +189.69 ( 19.36)
UDP_RR                  336-threads      1.00 ( 24.09)    +0.54 ( 22.28)
UDP_RR                  392-threads      1.00 ( 39.63)    -3.90 ( 33.77)
UDP_RR                  448-threads      1.00 ( 43.57)    +1.57 ( 40.43)

tbench
======
case                    load            baseline(std%)  compare%( std%)
loopback                56-threads       1.00 (  0.50)   +10.78 (  0.52)
loopback                112-threads      1.00 (  0.19)    +2.73 (  0.08)
loopback                168-threads      1.00 (  0.09)  +173.72 (  0.47)
loopback                224-threads      1.00 (  0.20)    -2.13 (  0.42)
loopback                280-threads      1.00 (  0.06)    -0.77 (  0.15)
loopback                336-threads      1.00 (  0.14)    -0.08 (  0.08)
loopback                392-threads      1.00 (  0.17)    -0.27 (  0.86)
loopback                448-threads      1.00 (  0.37)    +0.32 (  0.02)

hackbench
=========
case                    load            baseline(std%)  compare%( std%)
process-pipe            1-groups         1.00 (  0.94)    -0.67 (  0.45)
process-pipe            2-groups         1.00 (  3.22)    -3.00 (  3.35)
process-pipe            4-groups         1.00 (  1.66)    -3.25 (  1.87)
process-sockets         1-groups         1.00 (  0.70)    +1.34 (  0.44)
process-sockets         2-groups         1.00 (  0.24)    +6.99 ( 11.23)
process-sockets         4-groups         1.00 (  0.61)    +1.72 (  0.57)
threads-pipe            1-groups         1.00 (  0.95)    -0.66 (  0.74)
threads-pipe            2-groups         1.00 (  0.79)    -0.59 (  2.10)
threads-pipe            4-groups         1.00 (  1.97)    -1.23 ( 10.62)
threads-sockets         1-groups         1.00 (  0.73)    -2.59 (  1.32)
threads-sockets         2-groups         1.00 (  0.30)    -1.95 (  1.68)
threads-sockets         4-groups         1.00 (  1.22)    +1.86 (  0.73)

schbench
========
case                    load            baseline(std%)  compare%( std%)
normal                  1-mthreads       1.00 (  0.00)    +0.88 (  1.25)
normal                  2-mthreads       1.00 (  2.09)    +0.85 (  2.44)
normal                  4-mthreads       1.00 (  1.29)    -1.82 (  4.55)
normal                  8-mthreads       1.00 (  1.22)    +3.45 (  1.26)

Redis
=====
Launch 224 instances of redis-server on machine A, launch 224 instances
of redis-benchmark on machine B, and measure the SET/GET latency on B.
The test was run over a 1G NIC card. The 99th percentile latency before
vs after SIS_CURRENT did not change much:

                baseline        sis_current
SET             115 ms          116 ms
GET             225 ms          228 ms

Prateek tested this patch on a dual socket Zen3 system (2 x 64C/128T).
tbench and netperf show good improvements at 128 clients. SPECjbb2015
shows some improvement in max-jOPS:

                                tip             SIS_CURRENT
SPECjbb2015 max-jOPS            100.00%         102.78%
SPECjbb2015 Critical-jOPS       100.00%         100.00%

The other benchmarks are performance neutral.

[1] https://lore.kernel.org/lkml/9ed75cad-3718-356f-21ca-1b8ec601f335@linux.alibaba.com/
[2] https://lore.kernel.org/lkml/20230308073201.3102738-3-avagin@google.com/

Suggested-by: Tim Chen <tim.c.chen@intel.com>
Suggested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Tested-by: kernel test robot <yujie.liu@intel.com>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Alexandre Frade <kernel@xanmod.org>