
Commit 0f54f63

htejun authored and gregkh committed
sched_ext: Read scx_root under scx_cgroup_ops_rwsem in cgroup setters
commit 80afd4c upstream.

scx_group_set_{weight,idle,bandwidth}() cache scx_root before acquiring scx_cgroup_ops_rwsem, so the pointer can be stale by the time the op runs. If the loaded scheduler is disabled and freed (via RCU work) and another is enabled between the naked load and the rwsem acquire, the reader sees scx_cgroup_enabled=true (the new scheduler's) but dereferences the freed one - a use-after-free on SCX_HAS_OP(sch, ...) / SCX_CALL_OP(sch, ...).

scx_cgroup_enabled is toggled only under scx_cgroup_ops_rwsem write (scx_cgroup_{init,exit}), so reading scx_root inside the rwsem read section correlates @sch with the enabled snapshot.

Fixes: a5bd6ba ("sched_ext: Use cgroup_lock/unlock() to synchronize against cgroup operations")
Cc: stable@vger.kernel.org # v6.18+
Reported-by: Chris Mason <clm@meta.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Andrea Righi <arighi@nvidia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
1 parent f3d0cd9 commit 0f54f63

1 file changed

Lines changed: 6 additions & 3 deletions

kernel/sched/ext.c
@@ -3430,9 +3430,10 @@ void scx_cgroup_cancel_attach(struct cgroup_taskset *tset)

 void scx_group_set_weight(struct task_group *tg, unsigned long weight)
 {
-	struct scx_sched *sch = scx_root;
+	struct scx_sched *sch;

 	percpu_down_read(&scx_cgroup_ops_rwsem);
+	sch = scx_root;

 	if (scx_cgroup_enabled && SCX_HAS_OP(sch, cgroup_set_weight) &&
 	    tg->scx.weight != weight)
@@ -3446,9 +3447,10 @@ void scx_group_set_weight(struct task_group *tg, unsigned long weight)

 void scx_group_set_idle(struct task_group *tg, bool idle)
 {
-	struct scx_sched *sch = scx_root;
+	struct scx_sched *sch;

 	percpu_down_read(&scx_cgroup_ops_rwsem);
+	sch = scx_root;

 	if (scx_cgroup_enabled && SCX_HAS_OP(sch, cgroup_set_idle))
 		SCX_CALL_OP(sch, SCX_KF_UNLOCKED, cgroup_set_idle, NULL,
@@ -3463,9 +3465,10 @@ void scx_group_set_idle(struct task_group *tg, bool idle)
 void scx_group_set_bandwidth(struct task_group *tg,
 			     u64 period_us, u64 quota_us, u64 burst_us)
 {
-	struct scx_sched *sch = scx_root;
+	struct scx_sched *sch;

 	percpu_down_read(&scx_cgroup_ops_rwsem);
+	sch = scx_root;

 	if (scx_cgroup_enabled && SCX_HAS_OP(sch, cgroup_set_bandwidth) &&
 	    (tg->scx.bw_period_us != period_us ||
