
Commit 59aabfc

longman88 authored and Ingo Molnar committed
locking/rwsem: Reduce spinlock contention in wakeup after up_read()/up_write()
In up_write()/up_read(), rwsem_wake() will be called whenever it detects that some writers/readers are waiting. The rwsem_wake() function will take the wait_lock and call __rwsem_do_wake() to do the real wakeup. For a heavily contended rwsem, doing a spin_lock() on wait_lock will cause further contention on the heavily contended rwsem cacheline, resulting in a delay in the completion of the up_read()/up_write() operations.

This patch makes the wait_lock taking and the call to __rwsem_do_wake() optional if at least one spinning writer is present. The spinning writer will be able to take the rwsem and call rwsem_wake() later when it calls up_write(). With the presence of a spinning writer, rwsem_wake() will now try to acquire the lock using trylock. If that fails, it will just quit.

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Jason Low <jason.low2@hp.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1430428337-16802-2-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
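As background for the message above (not part of the patch): up_write() and up_read() only fall into rwsem_wake() on their slow path, when the sign of the count shows that waiters are queued. The following is a minimal sketch of the write-unlock side, assuming the generic rwsem-xadd count layout of this era (a plain long count field and RWSEM_ACTIVE_WRITE_BIAS); the sketch function name is hypothetical and the real code in the arch/asm-generic rwsem headers may differ in detail.

/* Sketch only: how up_write() reaches rwsem_wake(); not the actual kernel code. */
static inline void up_write_sketch(struct rw_semaphore *sem)
{
	long count = atomic_long_sub_return(RWSEM_ACTIVE_WRITE_BIAS,
					    (atomic_long_t *)&sem->count);

	if (unlikely(count < 0))
		rwsem_wake(sem);	/* waiters are queued: run the wakeup path */
}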
1 parent 3e0283a commit 59aabfc

2 files changed: +49 −0 lines


include/linux/osq_lock.h

Lines changed: 5 additions & 0 deletions
@@ -32,4 +32,9 @@ static inline void osq_lock_init(struct optimistic_spin_queue *lock)
 
 extern bool osq_lock(struct optimistic_spin_queue *lock);
 extern void osq_unlock(struct optimistic_spin_queue *lock);
 
+static inline bool osq_is_locked(struct optimistic_spin_queue *lock)
+{
+	return atomic_read(&lock->tail) != OSQ_UNLOCKED_VAL;
+}
+
 #endif
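To make the new helper's semantics concrete, here is a small, hypothetical usage sketch (the demo function and the on-stack queue are illustrative only; in the kernel the OSQ is embedded in a mutex or rwsem):

/* Illustration only: OSQ_UNLOCKED_VAL is 0, so a freshly initialised queue
 * reports "not locked"; once a CPU joins via osq_lock(), the tail holds an
 * encoded CPU number and osq_is_locked() stays true until osq_unlock(). */
static void osq_is_locked_demo(void)
{
	struct optimistic_spin_queue q;

	osq_lock_init(&q);
	WARN_ON(osq_is_locked(&q));	/* tail == OSQ_UNLOCKED_VAL */

	if (osq_lock(&q)) {		/* we are now a queued spinner */
		WARN_ON(!osq_is_locked(&q));
		osq_unlock(&q);		/* last spinner gone: unlocked again */
	}
}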

kernel/locking/rwsem-xadd.c

Lines changed: 44 additions & 0 deletions
@@ -409,11 +409,24 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 	return taken;
 }
 
+/*
+ * Return true if the rwsem has active spinner
+ */
+static inline bool rwsem_has_spinner(struct rw_semaphore *sem)
+{
+	return osq_is_locked(&sem->osq);
+}
+
 #else
 static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 {
 	return false;
 }
+
+static inline bool rwsem_has_spinner(struct rw_semaphore *sem)
+{
+	return false;
+}
 #endif
 
 /*
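For context (not part of this patch): rwsem_has_spinner() is a valid test because rwsem_optimistic_spin() holds the per-semaphore OSQ for the entire spinning window. A hedged sketch of that shape follows, with the owner-tracking and lock-stealing details elided and the function name hypothetical.

/* Sketch only: the queue is taken before spinning starts and released when
 * spinning ends, whether or not the rwsem was acquired, so osq_is_locked()
 * is true exactly while at least one writer is still spinning. */
static bool rwsem_optimistic_spin_sketch(struct rw_semaphore *sem)
{
	bool taken = false;

	if (!osq_lock(&sem->osq))
		return false;		/* could not join the spinner queue */

	/*
	 * ... spin here while the lock owner is running, repeatedly trying
	 * to take the rwsem; set taken = true on success ...
	 */

	osq_unlock(&sem->osq);		/* no longer counted as a spinner */
	return taken;
}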
@@ -496,7 +509,38 @@ struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
 {
 	unsigned long flags;
 
+	/*
+	 * If a spinner is present, it is not necessary to do the wakeup.
+	 * Try to do wakeup only if the trylock succeeds to minimize
+	 * spinlock contention which may introduce too much delay in the
+	 * unlock operation.
+	 *
+	 *    spinning writer           up_write/up_read caller
+	 *    ---------------           -----------------------
+	 * [S]   osq_unlock()           [L]   osq
+	 *       MB                           RMB
+	 * [RmW] rwsem_try_write_lock() [RmW] spin_trylock(wait_lock)
+	 *
+	 * Here, it is important to make sure that there won't be a missed
+	 * wakeup while the rwsem is free and the only spinning writer goes
+	 * to sleep without taking the rwsem. Even when the spinning writer
+	 * is just going to break out of the waiting loop, it will still do
+	 * a trylock in rwsem_down_write_failed() before sleeping. IOW, if
+	 * rwsem_has_spinner() is true, it will guarantee at least one
+	 * trylock attempt on the rwsem later on.
+	 */
+	if (rwsem_has_spinner(sem)) {
+		/*
+		 * The smp_rmb() here is to make sure that the spinner
+		 * state is consulted before reading the wait_lock.
+		 */
+		smp_rmb();
+		if (!raw_spin_trylock_irqsave(&sem->wait_lock, flags))
+			return sem;
+		goto locked;
+	}
 	raw_spin_lock_irqsave(&sem->wait_lock, flags);
+locked:
 
 	/* do nothing if list empty */
 	if (!list_empty(&sem->wait_list))