
Commit 08be8f6

Waiman Long authored and Ingo Molnar committed
locking/pvstat: Separate wait_again and spurious wakeup stats
Currently there is overlap between the pvqspinlock wait_again and spurious_wakeup stat counters. Because of lock stealing, it is no longer possible to accurately determine whether a spurious wakeup has happened at the queue head. As the counters track both the queue node and queue head status, it is also hard to tell how many of those events come from the queue head and how many from the queue node.

This patch changes the accounting rules so that spurious wakeup is only tracked in the queue node. The wait_again count, however, is only tracked in the queue head when the vCPU failed to acquire the lock after a vCPU kick. This should give a much better indication of the wait-kick dynamics in the queue node and the queue head.

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Pan Xinhui <xinhui@linux.vnet.ibm.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1464713631-1066-2-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
1 parent 64a5e3c commit 08be8f6
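
To make the new accounting split concrete, here is a small userspace toy model (an illustration only, not the pvqspinlock code; the function names and plain counters are simplified assumptions): a spurious wakeup is counted only on the queue-node path, while wait_again is counted only on the queue-head path when the lock could not be acquired after a kick.

```c
/* Toy model of the accounting split described above -- not kernel code.
 * Function names and plain counters are simplified assumptions. */
#include <stdbool.h>
#include <stdio.h>

static unsigned long pv_spurious_wakeup;   /* bumped only on the queue-node path */
static unsigned long pv_wait_again;        /* bumped only on the queue-head path */

/* A queue node woke up from its wait but its MCS "locked" flag is still
 * clear, so the wakeup was spurious. */
static void node_woke_up(bool mcs_locked)
{
        if (!mcs_locked)
                pv_spurious_wakeup++;
}

/* The queue head was kicked by the unlocker but another vCPU stole the
 * lock, so it has to wait again. */
static void head_kicked(bool acquired_lock)
{
        if (!acquired_lock)
                pv_wait_again++;
}

int main(void)
{
        node_woke_up(false);    /* spurious wakeup at a queue node */
        head_kicked(false);     /* lock stolen after a queue-head kick */
        printf("pv_spurious_wakeup=%lu pv_wait_again=%lu\n",
               pv_spurious_wakeup, pv_wait_again);
        return 0;
}
```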

2 files changed: +5, -11 lines changed

kernel/locking/qspinlock_paravirt.h

Lines changed: 3 additions & 9 deletions
```diff
@@ -288,12 +288,10 @@ static void pv_wait_node(struct mcs_spinlock *node, struct mcs_spinlock *prev)
 {
 	struct pv_node *pn = (struct pv_node *)node;
 	struct pv_node *pp = (struct pv_node *)prev;
-	int waitcnt = 0;
 	int loop;
 	bool wait_early;
 
-	/* waitcnt processing will be compiled out if !QUEUED_LOCK_STAT */
-	for (;; waitcnt++) {
+	for (;;) {
 		for (wait_early = false, loop = SPIN_THRESHOLD; loop; loop--) {
 			if (READ_ONCE(node->locked))
 				return;
@@ -317,7 +315,6 @@ static void pv_wait_node(struct mcs_spinlock *node, struct mcs_spinlock *prev)
 
 		if (!READ_ONCE(node->locked)) {
 			qstat_inc(qstat_pv_wait_node, true);
-			qstat_inc(qstat_pv_wait_again, waitcnt);
 			qstat_inc(qstat_pv_wait_early, wait_early);
 			pv_wait(&pn->state, vcpu_halted);
 		}
@@ -458,12 +455,9 @@ pv_wait_head_or_lock(struct qspinlock *lock, struct mcs_spinlock *node)
 		pv_wait(&l->locked, _Q_SLOW_VAL);
 
 		/*
-		 * The unlocker should have freed the lock before kicking the
-		 * CPU. So if the lock is still not free, it is a spurious
-		 * wakeup or another vCPU has stolen the lock. The current
-		 * vCPU should spin again.
+		 * Because of lock stealing, the queue head vCPU may not be
+		 * able to acquire the lock before it has to wait again.
 		 */
-		qstat_inc(qstat_pv_spurious_wakeup, READ_ONCE(l->locked));
 	}
 
 	/*
```
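
The diff above leans on the qstat_inc(stat, cond) helper, whose second argument is a condition rather than an amount. A simplified userspace sketch of that pattern follows (an assumption for illustration; the kernel's definition uses per-CPU counters and is compiled out when QUEUED_LOCK_STAT is not set, as the removed comment notes):

```c
/* Simplified sketch of the qstat_inc(stat, cond) pattern -- an
 * illustration, not the kernel's implementation. */
#include <stdbool.h>

enum qlock_stats {
        qstat_pv_spurious_wakeup,
        qstat_pv_wait_again,
        qstat_pv_wait_node,
        qstat_num,              /* number of counters */
};

static unsigned long qstats[qstat_num];

static inline void qstat_inc(enum qlock_stats stat, bool cond)
{
        if (cond)               /* count the event only when cond holds */
                qstats[stat]++;
}
```

With this shape, the removed qstat_inc(qstat_pv_wait_again, waitcnt) in pv_wait_node only counted iterations after the first one (waitcnt starts at 0), i.e. repeat waits at a queue node; that node-side coupling is exactly what the patch drops.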

kernel/locking/qspinlock_stat.h

Lines changed: 2 additions & 2 deletions
```diff
@@ -24,8 +24,8 @@
  * pv_latency_wake    - average latency (ns) from vCPU kick to wakeup
  * pv_lock_slowpath   - # of locking operations via the slowpath
  * pv_lock_stealing   - # of lock stealing operations
- * pv_spurious_wakeup - # of spurious wakeups
- * pv_wait_again      - # of vCPU wait's that happened after a vCPU kick
+ * pv_spurious_wakeup - # of spurious wakeups in non-head vCPUs
+ * pv_wait_again      - # of wait's after a queue head vCPU kick
  * pv_wait_early      - # of early vCPU wait's
  * pv_wait_head       - # of vCPU wait's at the queue head
  * pv_wait_node       - # of vCPU wait's at a non-head queue node
```
