
Commit 3942a9b

Peter Zijlstra authored, Ingo Molnar committed
locking, rcu, cgroup: Avoid synchronize_sched() in __cgroup_procs_write()
The current percpu-rwsem read side is entirely free of serializing instructions, at the cost of a synchronize_sched() in the write path. The latency of that synchronize_sched() is too high for cgroups. Commit 1ed1328 describes the write path as a fairly cold path, but that is not the case for Android, which moves tasks to the foreground cgroup and back around binder IPC calls from foreground processes to background processes, so the path is significantly hotter than human-initiated operations alone would make it.

Switch cgroup_threadgroup_rwsem into the slow mode for now to avoid the problem; hopefully it should not be that slow after another commit, 80127a3 ("locking/percpu-rwsem: Optimize readers and reduce global impact").

We could simply add rcu_sync_enter() to cgroup_init(), but we do not want another synchronize_sched() at boot time, so this patch adds a new helper which does not block but currently can only be called before first use.

Reported-by: John Stultz <john.stultz@linaro.org>
Reported-by: Dmitry Shmidt <dimitrysh@google.com>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Colin Cross <ccross@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rom Lemarchand <romlem@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Todd Kjos <tkjos@google.com>
Link: http://lkml.kernel.org/r/20160811165413.GA22807@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
1 parent e8cb0fe commit 3942a9b

File tree

3 files changed: +19 −0 lines changed

include/linux/rcu_sync.h

Lines changed: 1 addition & 0 deletions
@@ -59,6 +59,7 @@ static inline bool rcu_sync_is_idle(struct rcu_sync *rsp)
 }
 
 extern void rcu_sync_init(struct rcu_sync *, enum rcu_sync_type);
+extern void rcu_sync_enter_start(struct rcu_sync *);
 extern void rcu_sync_enter(struct rcu_sync *);
 extern void rcu_sync_exit(struct rcu_sync *);
 extern void rcu_sync_dtor(struct rcu_sync *);

kernel/cgroup.c

Lines changed: 6 additions & 0 deletions
@@ -5606,6 +5606,12 @@ int __init cgroup_init(void)
 	BUG_ON(cgroup_init_cftypes(NULL, cgroup_dfl_base_files));
 	BUG_ON(cgroup_init_cftypes(NULL, cgroup_legacy_base_files));
 
+	/*
+	 * The latency of the synchronize_sched() is too high for cgroups,
+	 * avoid it at the cost of forcing all readers into the slow path.
+	 */
+	rcu_sync_enter_start(&cgroup_threadgroup_rwsem.rss);
+
 	get_user_ns(init_cgroup_ns.user_ns);
 
 	mutex_lock(&cgroup_mutex);

kernel/rcu/sync.c

Lines changed: 12 additions & 0 deletions
@@ -84,6 +84,18 @@ void rcu_sync_init(struct rcu_sync *rsp, enum rcu_sync_type type)
 	rsp->gp_type = type;
 }
 
+/**
+ * Must be called after rcu_sync_init() and before first use.
+ *
+ * Ensures rcu_sync_is_idle() returns false and rcu_sync_{enter,exit}()
+ * pairs turn into NO-OPs.
+ */
+void rcu_sync_enter_start(struct rcu_sync *rsp)
+{
+	rsp->gp_count++;
+	rsp->gp_state = GP_PASSED;
+}
+
 /**
  * rcu_sync_enter() - Force readers onto slowpath
  * @rsp: Pointer to rcu_sync structure to use for synchronization
