
Commit d319808

Joonsoo Kim authored and Ingo Molnar committed
sched: Move up affinity check to mitigate useless redoing overhead
Currently, LBF_ALL_PINNED is cleared only after the affinity check has passed. So if task migration is skipped in move_tasks() because of a small load value or a small imbalance value, we never clear LBF_ALL_PINNED and end up triggering 'redo' in load_balance(). The imbalance value is often so small that no task can be moved to another cpu, and of course this situation may persist even after we change the target cpu. So this patch moves the affinity check up and clears LBF_ALL_PINNED before evaluating the load value, in order to mitigate the overhead of useless redoing. In addition, it re-orders some comments correctly.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Tested-by: Jason Low <jason.low2@hp.com>
Cc: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1366705662-3587-5-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
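The flag logic is easiest to see in isolation. Below is a minimal, self-contained userspace sketch, not kernel code: struct task, scan_old() and scan_new() are made up for illustration. It shows how the ordering of the load check versus the affinity check decides whether LBF_ALL_PINNED survives a scan of the source runqueue and triggers a redo.

    #include <stdio.h>
    #include <stdbool.h>

    #define LBF_ALL_PINNED 0x01

    struct task { int load; bool allowed_on_dst; };

    /* Old order: load checks first, affinity check last.  A task skipped
     * for being too heavy never reaches the affinity check, so
     * LBF_ALL_PINNED is never cleared and load_balance() will redo. */
    static unsigned int scan_old(struct task *t, int n, int imbalance)
    {
            unsigned int flags = LBF_ALL_PINNED;

            for (int i = 0; i < n; i++) {
                    if (t[i].load / 2 > imbalance)
                            continue;   /* skipped before affinity check */
                    if (!t[i].allowed_on_dst)
                            continue;
                    flags &= ~LBF_ALL_PINNED;   /* found a movable task */
            }
            return flags;
    }

    /* New order: affinity check first, so LBF_ALL_PINNED is cleared as
     * soon as any task may run on the destination cpu, even if that
     * task is then skipped for load reasons. */
    static unsigned int scan_new(struct task *t, int n, int imbalance)
    {
            unsigned int flags = LBF_ALL_PINNED;

            for (int i = 0; i < n; i++) {
                    if (!t[i].allowed_on_dst)
                            continue;
                    flags &= ~LBF_ALL_PINNED;   /* cleared before load check */
                    if (t[i].load / 2 > imbalance)
                            continue;
            }
            return flags;
    }

    int main(void)
    {
            /* One migratable but heavy task, and a tiny imbalance. */
            struct task t[] = { { .load = 100, .allowed_on_dst = true } };

            printf("old order redoes: %s\n",
                   scan_old(t, 1, 10) & LBF_ALL_PINNED ? "yes" : "no");
            printf("new order redoes: %s\n",
                   scan_new(t, 1, 10) & LBF_ALL_PINNED ? "yes" : "no");
            return 0;
    }

With the old ordering the scan reports "all pinned" even though the task's affinity allows the destination cpu; with the new ordering it does not, which is exactly the useless-redo case the patch eliminates.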
1 parent cfc0311 commit d319808

File tree: 1 file changed (+7, -9 lines)


kernel/sched/fair.c

Lines changed: 7 additions & 9 deletions
@@ -3896,10 +3896,14 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
 	int tsk_cache_hot = 0;
 	/*
 	 * We do not migrate tasks that are:
-	 * 1) running (obviously), or
+	 * 1) throttled_lb_pair, or
 	 * 2) cannot be migrated to this CPU due to cpus_allowed, or
-	 * 3) are cache-hot on their current CPU.
+	 * 3) running (obviously), or
+	 * 4) are cache-hot on their current CPU.
 	 */
+	if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
+		return 0;
+
 	if (!cpumask_test_cpu(env->dst_cpu, tsk_cpus_allowed(p))) {
 		int new_dst_cpu;

@@ -3967,9 +3971,6 @@ static int move_one_task(struct lb_env *env)
 	struct task_struct *p, *n;

 	list_for_each_entry_safe(p, n, &env->src_rq->cfs_tasks, se.group_node) {
-		if (throttled_lb_pair(task_group(p), env->src_rq->cpu, env->dst_cpu))
-			continue;
-
 		if (!can_migrate_task(p, env))
 			continue;

@@ -4021,7 +4022,7 @@ static int move_tasks(struct lb_env *env)
 			break;
 		}

-		if (throttled_lb_pair(task_group(p), env->src_cpu, env->dst_cpu))
+		if (!can_migrate_task(p, env))
 			goto next;

 		load = task_h_load(p);
@@ -4032,9 +4033,6 @@ static int move_tasks(struct lb_env *env)
 		if ((load / 2) > env->imbalance)
 			goto next;

-		if (!can_migrate_task(p, env))
-			goto next;
-
 		move_task(p, env);
 		pulled++;
 		env->imbalance -= load;
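For context on why calling can_migrate_task() before the load checks fixes the flag: in fair.c of this era, the affinity test inside can_migrate_task() is immediately followed by the statement that clears LBF_ALL_PINNED. A condensed sketch of that surrounding code, with elisions marked and the exact shape approximate:

    	/* inside can_migrate_task(), after the new throttled_lb_pair() check */
    	if (!cpumask_test_cpu(env->dst_cpu, tsk_cpus_allowed(p))) {
    		/* ... may set LBF_SOME_PINNED and remember a new dst_cpu ... */
    		return 0;
    	}

    	/* Record that we found at least one task that could run on dst_cpu */
    	env->flags &= ~LBF_ALL_PINNED;

With the patch, this point is reached for every task whose affinity allows the destination cpu, even if the task is later skipped because of its load, so load_balance() no longer sees a spurious LBF_ALL_PINNED and avoids the useless redo.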
