add phase-fairness RW lock scheduling #20

Merged
merged 8 commits into master from fair-shared-mtx on Feb 24, 2018

@yohhoy
Owner

yohhoy commented Feb 18, 2018

  • add phase-fairness scheduling policy
  • add phased shared-lock acquisition testcase (a0e897b)
  • add no writer starvation testcase (bb5402d)
  • update tests/perf_rwlock (ef43ad8)
  • update README (cd24036)
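
For orientation, a minimal usage sketch of the new policy (hedged: the header path and the basic_shared_mutex/PhaseFairness spellings follow this PR's discussion and README, and may differ from the actual headers):

  // Minimal usage sketch; header/type names are assumptions from this PR.
  #include <mutex>         // std::lock_guard
  #include <shared_mutex>  // std::shared_lock
  #include "fair_shared_mutex.hpp"

  using SharedMutex = yamc::fair::basic_shared_mutex<yamc::rwlock::PhaseFairness>;

  SharedMutex mtx;
  int counter = 0;

  void writer() {
    std::lock_guard<SharedMutex> lk(mtx);   // exclusive (writer) lock
    ++counter;
  }

  void reader() {
    std::shared_lock<SharedMutex> lk(mtx);  // shared (reader) lock
    int snapshot = counter;
    (void)snapshot;
  }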
@yohhoy

Owner

yohhoy commented Feb 18, 2018

The following testcase sometimes causes a deadlock in the CI environment:

TypeParam = yamc::fair::basic_shared_timed_mutex<yamc::rwlock::TaskFairness>
FairSharedTimedMutexTest/1.FifoTryLockUntil

https://travis-ci.org/yohhoy/yamc/jobs/343020786
https://ci.appveyor.com/project/yohhoy/yamc/build/1.0.128/job/1e4s4ss6q82kn6bq

@yohhoy

Owner

yohhoy commented Feb 19, 2018

master (728b6c5): T3/W calls try_lock_{for,until} after Phase 3 and times out (*) while T1/R owns the shared lock.

        p1  p2  p3 p4 p5
T0/W: T=a=1=a=2=a=3=a=a=U..........
        |   |   |   |  \|----\
T1/R: ..w.s-|---|---|---S=====w=7=V
        |   |   |   |   |     |
T2/R: ..a...w.s-|---|---S=6=V.a....
        |  /    |   |         |
T3/W: ..a.a.....w.t-|-4---5-*.a....
        | |    /    |       | |
T4/R: ..a.a...a.....w.s-----S=w=8=V

current (a0e897b): T3/W's timeout (*) and T0/W's exclusive unlock (U) race each other, because T0/Step2-3 and T3/Step4-5 wait on the same steps.

        p1  p2  p3  p4
T0/W: T=a=1=a=2=a=3=a=U..........
        |   |   |    \|----\
T1/R: ..w.s-|---|-----S=====w=7=V
        |   |   |     |     |
T2/R: ..w.s-|---|-----S=6=V.a....
        |   |   |           |
T3/W: ..a...w.t-|-----4-5-*.a....
        |  /    |         | |
T4/R: ..a.a.....w.s-------S=w=8=V
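
A simplified repro sketch of this race (hedged: this is not the actual FifoTryLockUntil testcase; the mutex type is the one named in the CI failure above, and the timing is only approximate):

  #include <chrono>
  #include <thread>
  #include "fair_shared_mutex.hpp"

  using Mutex = yamc::fair::basic_shared_timed_mutex<yamc::rwlock::TaskFairness>;

  int main() {
    Mutex mtx;
    mtx.lock();                          // T0/W holds the exclusive lock (T)
    std::thread readers[2];
    for (auto& t : readers) {            // T1/R, T2/R queue behind T0
      t = std::thread([&] { mtx.lock_shared(); mtx.unlock_shared(); });
    }
    std::thread writer([&] {             // T3/W: deadline expires (*)
      if (mtx.try_lock_for(std::chrono::milliseconds(100)))
        mtx.unlock();
    });
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    mtx.unlock();                        // U: races with T3's timeout path
    writer.join();
    for (auto& t : readers) t.join();
  }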
@yohhoy

Owner

yohhoy commented Feb 19, 2018

https://ci.appveyor.com/project/yohhoy/yamc/build/1.0.134/job/k7o3gc4o8d0w6733

OK: try_lock_{for,until}/timeout -> unlock -> lock_shareds

  lock_shared/wait [L1:E, R:s, R:s, R:e, R:s]
<<try_lockwait/false [L1:E, R:s, R:s, R:s]
>>unlock [L1:E, R:s, R:s, R:s]
<<unlock [R:S, R:S, R:S]        // OK: mark contiguous subsequent shared-locks
<<lock_shared [L1:S, R:S, R:S]
...(cont)
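
A hedged legend for these traces, inferred from the status checks quoted later in this thread ((status & node_status_mask) == 3 and mode | 2); the numeric values are an inference, not confirmed by the source:

  // Inferred node status encoding (assumption):
  enum node_status {
    waiting_exclusive = 0,  // 'e' in the traces
    waiting_shared    = 1,  // 's'
    locked_exclusive  = 2,  // 'E' (exclusive mode 0 | locked bit 2)
    lockable_shared   = 3,  // 'S' (shared mode 1 | locked bit 2)
  };
  // Trace notation: [front, ..., back] is the wait queue; 'Ln:'/'n:'
  // appears to be the owner count of the front (locked) group, and
  // '>>'/'<<' mark entry to / exit from the named operation.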

OK/AnotherScenario: unlock -> lock_shareds -> try_lock_{for,until}/timeout

  lock_shared/wait [L1:E, R:s, R:s, R:e, R:s]
>>unlock [L1:E, R:s, R:s, R:e, R:s]
<<unlock [R:S, R:S, R:e, R:s]     // mark contiguous subsequent shared-locks
<<lock_shared [L1:S, R:S, R:e, R:s]
<<lock_shared [L2:S, R:e, R:s]
<<try_lockwait/false [L2:S, R:S]  // OK: merge shared-lock groups
<<lock_shared [L3:S]
...(cont)

NG: unlock -> try_lock_{for,until}/timeout -> lock_shareds

  lock_shared/wait [L1:E, R:s, R:s, R:e, R:s]
>>unlock [L1:E, R:s, R:s, R:e, R:s]
<<unlock [R:S, R:S, R:e, R:s]         // mark contiguous subsequent shared-locks
<<try_lockwait/false [R:S, R:S, R:s]  // NG: not merged, no cv signal(!)
<<lock_shared [L1:S, R:S, R:s]
T0/W: T=a=1=a=2=a=3=a=U............
        |   |   |    \|----\
T1/R: ..w.s-|---|-----S=====w(=7=V)  wait for p4
        |   |   |     |     |
T2/R: ..w.s-|---|-----S=6=V.a.....
        |   |   |           |
T3/W: ..a...w.t-|-4-5-*.....a....
        |  /    |           |
T4/R: ..a.a.....w.s------(s=w=8=V)  wait for cv
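
A hedged sketch of the repair for this NG path (identifiers follow the hunks quoted just below; mark_following_shared_lockable is a hypothetical placeholder, not a yamc function, and the predecessor check uses the relaxed form proposed later in this thread):

  // On try_lock_* timeout: if the predecessor is an owning/'lockable'
  // shared node (status 3), merge the reader group queued behind this
  // node and signal the condition variable; the missing signal is what
  // strands T4 above.
  if ((request.prev->status & node_status_mask) == 3) {
    mark_following_shared_lockable(&request);  // hypothetical helper
    cv_.notify_all();  // wake the newly lockable readers
  }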
while (queue_.next != &request) {
  if (cv_.wait_until(lk, tp) == std::cv_status::timeout) {
    if (queue_.next == &request)  // re-check predicate
      break;
    if (request.prev == &locked_ && (locked_.status & node_status_mask) == 3) {
      //
      // Phase-merging: if the previous node represents the current
      // shared lock, i.e. the exclusive lock timed out while the
      // previous shared-lock group owns the lock, mark the subsequent
      // 'waiting' shared-lock nodes as 'lockable'.

@yohhoy

yohhoy Feb 20, 2018

Owner

request.prev == &locked_ (owned shared lock) is over-constrained.
It is enough to check (request.prev->status & node_status_mask) == 3 (an owned 'locking' or 'lockable' shared lock).
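
In code form, the relaxation proposed here (hedged; pointer member syntax assumed):

  // Before: merge only when the predecessor is the queue-head owner node.
  if (request.prev == &locked_ && (locked_.status & node_status_mask) == 3) { /* merge */ }
  // After: any 'locking'/'lockable' shared predecessor suffices.
  if ((request.prev->status & node_status_mask) == 3) { /* merge */ }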

    p = p->next;
if (RwLockFairness::phased) {
  // PhaseFairness: mark all queued shared-lock nodes
  for (node* p = request.next; p != &queue_; p = p->next) {


@yohhoy

yohhoy Feb 20, 2018

Owner

Even under the phase-fairness policy, merge only the previous/next shared-lock nodes, that is, mark only the immediately subsequent group of shared-lock nodes.
Do not mark ALL shared-lock requests, because that may cause writer starvation.
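
A hedged sketch of the bounded marking this comment asks for (identifiers follow the hunk above; status values as in the inferred legend earlier, 1 = waiting shared):

  // Mark only the contiguous run of waiting shared-lock nodes right
  // after 'request'; stop at the first waiting writer so later readers
  // stay behind it, instead of marking the whole queue.
  for (node* p = request.next; p != &queue_; p = p->next) {
    if ((p->status & node_status_mask) != 1)  // not a waiting shared node
      break;                                  // phase boundary reached
    p->status |= 2;                           // waiting (1) -> lockable (3)
  }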

yamc::fair::shared_(timed_)mutex bugfix
- deadlock on a race between unlock and a try_lock_* timeout.
- a try_lock_* timeout under the PhaseFairness policy overran the
  'lockable' marking of subsequent shared-lock nodes, causing writer
  starvation.
@yohhoy

Owner

yohhoy commented Feb 20, 2018

Assertion failure in lock_shared with the PhaseFairness policy (tests/perf_rwlock.cpp):

>>unlock [1:E, s, e, s, s, s, s, s, s, s]
<<unlock [S, e, S, S, S, S, S, S, S]  // mark all shared-lock nodes
<<lock_shared [*1:S, e, S, S, S, S, S, S, S]
>>unlock_shared [1:S, e, S, S, S, S, S, S, S]
<<unlock_shared [e, S, S, S, S, S, S, S]
<<lock [*1:E, S, S, S, S, S, S, S]  // NG: wrong exclusive-lock
>>lock_shared [1:E, S, S, S, S, S, S, S]
Assertion failed: ((locked_.status & node_status_mask) == (mode | 2)),
  function wq_push_locknode

@yohhoy yohhoy merged commit 9e4169f into master Feb 24, 2018

4 checks passed:
  continuous-integration/appveyor/branch: AppVeyor build succeeded
  continuous-integration/appveyor/pr: AppVeyor build succeeded
  continuous-integration/travis-ci/pr: The Travis CI build passed
  continuous-integration/travis-ci/push: The Travis CI build passed

@yohhoy yohhoy deleted the fair-shared-mtx branch Feb 27, 2018
