
Commit fe8d238

aaronlu authored and Peter Zijlstra committed
sched/fair: Propagate load for throttled cfs_rq
Before the task-based throttle model, load propagation would stop at a throttled cfs_rq, and the propagation would then happen at unthrottle time via update_load_avg(). Now that update_load_avg() is no longer called on unthrottle for a throttled cfs_rq and all load tracking is done by task-related operations, let the propagation happen immediately.

While at it, add a comment explaining why cfs_rqs that are not affected by throttling still have to be added to the leaf cfs_rq list in propagate_entity_cfs_rq(), per my understanding of commit 0258bdf ("sched/fair: Fix unfairness caused by missing load decay").

Signed-off-by: Aaron Lu <ziqianlu@bytedance.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
1 parent 5b726e9 commit fe8d238
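
To make the behavioral change concrete, here is a minimal user-space toy model (not kernel code: the struct layout and the propagate_old()/propagate_new() helpers are invented for illustration) contrasting the old model, where propagation stopped at a throttled cfs_rq, with the new one, where it walks the full hierarchy immediately:

#include <stdio.h>
#include <stdbool.h>

/* Toy cfs_rq: just enough state to show where propagation stops. */
struct cfs_rq {
	const char *name;
	bool throttled;
	long load;		/* accumulated propagated load */
	struct cfs_rq *parent;
};

/* Old model: propagation stopped at a throttled cfs_rq and was only
 * completed at unthrottle time by update_load_avg(). */
static void propagate_old(struct cfs_rq *cfs_rq, long delta)
{
	for (; cfs_rq; cfs_rq = cfs_rq->parent) {
		if (cfs_rq->throttled)
			return;	/* deferred until unthrottle */
		cfs_rq->load += delta;
	}
}

/* New model: load tracking is done by task operations, so the
 * propagation walks all the way up immediately. */
static void propagate_new(struct cfs_rq *cfs_rq, long delta)
{
	for (; cfs_rq; cfs_rq = cfs_rq->parent)
		cfs_rq->load += delta;
}

int main(void)
{
	struct cfs_rq root = { "root", false, 0, NULL };
	struct cfs_rq mid  = { "mid",  true,  0, &root };	/* throttled */
	struct cfs_rq leaf = { "leaf", false, 0, &mid };

	propagate_old(&leaf, 100);
	printf("old model: root.load = %ld\n", root.load);	/* prints 0 */

	root.load = mid.load = leaf.load = 0;
	propagate_new(&leaf, 100);
	printf("new model: root.load = %ld\n", root.load);	/* prints 100 */
	return 0;
}

With the old model, the root never sees the leaf's load until the middle cfs_rq is unthrottled; with the patch, it sees it right away.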

1 file changed

kernel/sched/fair.c (+18, -8)

@@ -5729,6 +5729,11 @@ static inline int cfs_rq_throttled(struct cfs_rq *cfs_rq)
 	return cfs_bandwidth_used() && cfs_rq->throttled;
 }
 
+static inline bool cfs_rq_pelt_clock_throttled(struct cfs_rq *cfs_rq)
+{
+	return cfs_bandwidth_used() && cfs_rq->pelt_clock_throttled;
+}
+
 /* check whether cfs_rq, or any parent, is throttled */
 static inline int throttled_hierarchy(struct cfs_rq *cfs_rq)
 {
@@ -6721,6 +6726,11 @@ static inline int cfs_rq_throttled(struct cfs_rq *cfs_rq)
 	return 0;
 }
 
+static inline bool cfs_rq_pelt_clock_throttled(struct cfs_rq *cfs_rq)
+{
+	return false;
+}
+
 static inline int throttled_hierarchy(struct cfs_rq *cfs_rq)
 {
 	return 0;
@@ -13151,10 +13161,13 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 
-	if (cfs_rq_throttled(cfs_rq))
-		return;
-
-	if (!throttled_hierarchy(cfs_rq))
+	/*
+	 * If a task gets attached to this cfs_rq and before being queued,
+	 * it gets migrated to another CPU due to reasons like affinity
+	 * change, make sure this cfs_rq stays on leaf cfs_rq list to have
+	 * that removed load decayed or it can cause a fairness problem.
+	 */
+	if (!cfs_rq_pelt_clock_throttled(cfs_rq))
 		list_add_leaf_cfs_rq(cfs_rq);
 
 	/* Start to propagate at parent */
@@ -13165,10 +13178,7 @@ static void propagate_entity_cfs_rq(struct sched_entity *se)
 
 	update_load_avg(cfs_rq, se, UPDATE_TG);
 
-	if (cfs_rq_throttled(cfs_rq))
-		break;
-
-	if (!throttled_hierarchy(cfs_rq))
+	if (!cfs_rq_pelt_clock_throttled(cfs_rq))
 		list_add_leaf_cfs_rq(cfs_rq);
 	}
 }
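
The comment added in the third hunk points at the scenario fixed by commit 0258bdf ("sched/fair: Fix unfairness caused by missing load decay"): if a task is attached to a cfs_rq and migrates away before being queued, the removed load can only decay while the cfs_rq remains on the leaf cfs_rq list. A toy user-space sketch (invented names; a made-up half-life decay stands in for the periodic blocked-load decay pass) of why falling off the list is a problem:

#include <stdio.h>
#include <stdbool.h>

/* Toy cfs_rq holding blocked load left behind by a task that was
 * attached and then migrated away before being queued. */
struct cfs_rq {
	const char *name;
	double blocked_load;
	bool on_leaf_list;
};

/* Stand-in for the periodic decay pass: only cfs_rqs that are on
 * the leaf list get visited. */
static void decay_pass(struct cfs_rq **leaf_list, int n)
{
	for (int i = 0; i < n; i++)
		leaf_list[i]->blocked_load *= 0.5;	/* fake half-life */
}

int main(void)
{
	struct cfs_rq listed  = { "listed",  100.0, true };
	struct cfs_rq skipped = { "skipped", 100.0, false };
	struct cfs_rq *leaf_list[] = { &listed };	/* skipped never added */

	for (int pass = 0; pass < 8; pass++)
		decay_pass(leaf_list, 1);

	printf("%s: %.2f (decays toward zero)\n",
	       listed.name, listed.blocked_load);
	printf("%s: %.2f (stale load never decays -> unfairness)\n",
	       skipped.name, skipped.blocked_load);
	return 0;
}

This is why propagate_entity_cfs_rq() now keeps a cfs_rq on the leaf list unless its PELT clock is actually stopped (cfs_rq_pelt_clock_throttled()), rather than skipping every throttled hierarchy.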
