sched: Store restrict_cpus_allowed_ptr() call state
The user_cpus_ptr field was originally added by commit b90ca8b
("sched: Introduce task_struct::user_cpus_ptr to track requested
affinity"). It was used only by the arm64 arch to handle possible
asymmetric CPU setups.

Since commit 8f9ea86 ("sched: Always preserve the user requested
cpumask"), task_struct::user_cpus_ptr has been repurposed to store the
user-requested CPU affinity specified via sched_setaffinity().

This results in a slight performance regression on an arm64
system when booted with "allow_mismatched_32bit_el0"
on the command-line. The arch code will (amongst
other things) call force_compatible_cpus_allowed_ptr() and
relax_compatible_cpus_allowed_ptr() when exec()'ing a 32-bit or a 64-bit
task respectively. Now a call to relax_compatible_cpus_allowed_ptr()
will always result in a __sched_setaffinity() call whether or not there
was a previous force_compatible_cpus_allowed_ptr() call.

In order to fix this regression, a new scheduler flag
task_struct::cpus_allowed_restricted is added to track whether
force_compatible_cpus_allowed_ptr() has been called. This
patch also updates the comments in force_compatible_cpus_allowed_ptr()
and relax_compatible_cpus_allowed_ptr() and handles their interaction
with sched_setaffinity().

This patch also removes the task_user_cpus() helper. In the case of
relax_compatible_cpus_allowed_ptr(), cpu_possible_mask can be used
directly, since user_cpus_ptr masking will be performed within
__sched_setaffinity() anyway.

Fixes: 8f9ea86 ("sched: Always preserve the user requested cpumask")
Reported-by: Will Deacon <will@kernel.org>
Signed-off-by: Waiman Long <longman@redhat.com>
Waiman Long authored and intel-lab-lkp committed Jan 28, 2023
1 parent 001c28e commit 25582b2
Showing 3 changed files with 21 additions and 15 deletions.
3 changes: 3 additions & 0 deletions include/linux/sched.h
@@ -886,6 +886,9 @@ struct task_struct {
unsigned sched_contributes_to_load:1;
unsigned sched_migrated:1;

/* restrict_cpus_allowed_ptr() bit, serialized by scheduler locks */
unsigned cpus_allowed_restricted:1;

/* Force alignment to the next boundary: */
unsigned :0;

25 changes: 17 additions & 8 deletions kernel/sched/core.c
@@ -2957,6 +2957,10 @@ static int __set_cpus_allowed_ptr(struct task_struct *p,
struct rq *rq;

rq = task_rq_lock(p, &rf);

if (ctx->flags & SCA_CLR_RESTRICT)
p->cpus_allowed_restricted = 0;

/*
* Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
* flags are set.
@@ -2983,8 +2987,8 @@ EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
/*
* Change a given task's CPU affinity to the intersection of its current
* affinity mask and @subset_mask, writing the resulting mask to @new_mask.
* If user_cpus_ptr is defined, use it as the basis for restricting CPU
* affinity or use cpu_online_mask instead.
* The cpus_allowed_restricted bit is set to indicate to a later
* relax_compatible_cpus_allowed_ptr() call to relax the cpumask.
*
* If the resulting mask is empty, leave the affinity unchanged and return
* -EINVAL.
@@ -3002,6 +3006,7 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
int err;

rq = task_rq_lock(p, &rf);
p->cpus_allowed_restricted = 1;

/*
* Forcefully restricting the affinity of a deadline task is
@@ -3013,7 +3018,8 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
goto err_unlock;
}

if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
if (p->user_cpus_ptr &&
    !cpumask_and(new_mask, p->user_cpus_ptr, subset_mask)) {
err = -EINVAL;
goto err_unlock;
}
@@ -3027,9 +3033,8 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,

/*
* Restrict the CPU affinity of task @p so that it is a subset of
* task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
* old affinity mask. If the resulting mask is empty, we warn and walk
* up the cpuset hierarchy until we find a suitable mask.
* task_cpu_possible_mask(). If the resulting mask is empty, we warn
* and walk up the cpuset hierarchy until we find a suitable mask.
*/
void force_compatible_cpus_allowed_ptr(struct task_struct *p)
{
@@ -3083,11 +3088,15 @@ __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
{
struct affinity_context ac = {
.new_mask = task_user_cpus(p),
.flags = 0,
.new_mask = cpu_possible_mask,
.flags = SCA_CLR_RESTRICT,
};
int ret;

/* Return if no previous force_compatible_cpus_allowed_ptr() call */
if (!data_race(p->cpus_allowed_restricted))
return;

/*
* Try to restore the old affinity mask with __sched_setaffinity().
* Cpuset masking will be done there too.
8 changes: 1 addition & 7 deletions kernel/sched/sched.h
@@ -1887,13 +1887,6 @@ static inline void dirty_sched_domain_sysctl(int cpu)
#endif

extern int sched_update_scaling(void);

static inline const struct cpumask *task_user_cpus(struct task_struct *p)
{
if (!p->user_cpus_ptr)
return cpu_possible_mask; /* &init_task.cpus_mask */
return p->user_cpus_ptr;
}
#endif /* CONFIG_SMP */

#include "stats.h"
@@ -2299,6 +2292,7 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq);
#define SCA_MIGRATE_DISABLE 0x02
#define SCA_MIGRATE_ENABLE 0x04
#define SCA_USER 0x08
#define SCA_CLR_RESTRICT 0x10

#ifdef CONFIG_SMP

