txn: only wake up waiters when locks are indeed released #7379
Conversation
Signed-off-by: youjiali1995 <zlwgx1023@gmail.com>
/release
LGTM except a small question.
If the DML only needs to lock one key, it won't send …
Looks good, a few comments inline, none are major
src/storage/txn/process.rs (outdated):
```rust
fn push(&mut self, lock: ReleasedLock) {
```
You could implement a `from_iter` method to build the hashes `Vec` in one go, rather than using `for` loops and `push`.
Sorry, I don't get it. I still need to push each hash to a vec to get the iter…
So for example, rather than use
```rust
for k in keys {
    released_locks.push(txn.rollback(k)?);
}
```
you can use
```rust
released_locks.hashes(keys.iter().map(|k| txn.rollback(k)))?;
```
where `hashes` is something like:
```rust
fn hashes<I, T, E>(&mut self, iter: I) -> Result<(), E>
where
    I: Iterator<Item = Result<T, E>>,
{
    self.hashes = iter.collect()?;
    Ok(())
}
```
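(For reference, a self-contained version of this sketch that compiles, with hypothetical minimal types; the real `ReleasedLocks` in TiKV carries more state:)

```rust
// Hypothetical minimal type for illustration only; the actual
// ReleasedLocks in src/storage/txn/process.rs holds more state.
struct ReleasedLocks {
    hashes: Vec<u64>,
}

impl ReleasedLocks {
    // Builds `hashes` in one pass, short-circuiting on the first error.
    fn hashes<I, E>(&mut self, iter: I) -> Result<(), E>
    where
        I: Iterator<Item = Result<u64, E>>,
    {
        self.hashes = iter.collect::<Result<Vec<u64>, E>>()?;
        Ok(())
    }
}
```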
I see. Thanks!
I tried to implement it, but I found it's more complex than the `for` loop:
```rust
impl ReleasedLocks {
    fn hashes<I, E>(&mut self, iter: I) -> std::result::Result<(), E>
    where
        I: Iterator<Item = std::result::Result<Option<ReleasedLock>, E>>,
    {
        self.hashes = iter
            .filter_map(|v| match v {
                Ok(Some(lock)) => {
                    if !self.pessimistic {
                        self.pessimistic = lock.pessimistic;
                    }
                    Some(Ok(lock.hash))
                }
                Ok(None) => None,
                Err(e) => Some(Err(e)),
            })
            .collect::<std::result::Result<Vec<u64>, E>>()?;
        Ok(())
    }
}

fn process_write_impl(...) {
    ...
    let mut released_locks = ReleasedLocks::new(lock_ts, commit_ts);
    // for k in keys {
    //     released_locks.push(txn.commit(k, commit_ts)?);
    // }
    released_locks.hashes(keys.into_iter().map(|k| txn.commit(k, commit_ts)))?;
    released_locks.wake_up(lock_mgr.as_ref());
    ...
}
```
And I still need to call `push` when processing `ResolveLock`…
@nrc PTAL again. Thanks!
Force-pushed from 2095388 to 1641dbb.
/run-all-tests
/bench +tpcc
Benchmark Report
```diff
@@ Benchmark Diff @@
================================================================================
tidb: c1a31708b0a3eb8e75adbc5cf75c86926fcf4d1b
--- tikv: 2b1b9b2cd537e47591795dc034b59a0585cfe7a8
+++ tikv: 1641dbb41273c50b7a0b630295d65fc9e722076e
pd: 8438f3fc004da1bff7442229e53fe4272f74ce2d
================================================================================
Measured tpmC (NewOrders): 8296.83 ± 1.98% (std=106.77), delta: -2.84% (p=0.036)
```
/bench +tpcc
Force-pushed from 1641dbb to 58feb08.
Benchmark Report
```diff
@@ Benchmark Diff @@
================================================================================
tidb: c1a31708b0a3eb8e75adbc5cf75c86926fcf4d1b
--- tikv: 7935019849d46b7d32b1a6b0d14e795cd7da1591
+++ tikv: 1641dbb41273c50b7a0b630295d65fc9e722076e
pd: 8438f3fc004da1bff7442229e53fe4272f74ce2d
================================================================================
Measured tpmC (NewOrders): 7007.72 ± 1.01% (std=66.19), delta: -1.77% (p=0.072)
```
LGTM, thanks for the changes!
@MyonKeminta PTAL
/run-all-tests
cherry pick to release-3.0 in PR #7549
cherry pick to release-3.1 in PR #7550
cherry pick to release-4.0 in PR #7551
txn: only wake up waiters when locks are indeed released (tikv#7379) (tikv#7585)
txn: don't protect rollback for BatchRollback (tikv#7605) (tikv#7608)
tidb_query: add is true/false keep null ScalarFuncSig (tikv#7532) (tikv#7566)
tidb_query: fix the logical behavior of floats (tikv#7342) (tikv#7582)
tidb_query: fix converting bytes to bool (tikv#7486) (tikv#7547)
raftstore: change the condition of proposing rollback merge (tikv#6584) (tikv#7762)
Signed-off-by: youjiali1995 zlwgx1023@gmail.com
What problem does this PR solve?
Problem Summary:
TiKV wakes up waiters as long as it receives a request that may release locks, e.g., `pessimistic_rollback`, `rollback`, or `commit`. If a request doesn't actually release any lock (typically because the lock doesn't exist), it needn't wake up waiters.

In TiDB, if a pessimistic DML meets a write conflict, it uses `pessimistic_rollback` to clean up all the locks it needs to acquire in this DML and then retries the DML. If a transaction is woken up while other transactions are still waiting for the lock, those transactions will be woken up by `pessimistic_rollback` one by one. This dramatically hurts performance and results in useless retries.

What is changed and how it works?
What's Changed:
Only wake up waiters when locks are indeed released, plus a small refactor (see the sketch below).
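A minimal sketch of the idea, reusing the `ReleasedLocks`/`push`/`wake_up` names from the snippets above (the types here are simplified stand-ins, not the actual implementation in `src/storage/txn/process.rs`):

```rust
// Simplified illustration of gating wake-ups on actual lock release.
// ReleasedLock, ReleasedLocks, and LockManager are stand-ins for the
// real TiKV types, which carry more state.
struct ReleasedLock {
    hash: u64,
    pessimistic: bool,
}

#[derive(Default)]
struct ReleasedLocks {
    hashes: Vec<u64>,
    pessimistic: bool,
}

trait LockManager {
    fn wake_up(&self, hashes: Vec<u64>, pessimistic: bool);
}

impl ReleasedLocks {
    // A command records a lock only when it actually removed one; a
    // commit/rollback that finds no lock yields `None` and records nothing.
    fn push(&mut self, lock: Option<ReleasedLock>) {
        if let Some(lock) = lock {
            self.hashes.push(lock.hash);
            self.pessimistic |= lock.pessimistic;
        }
    }

    // Waiters are notified only if at least one lock was released,
    // rather than on every commit/rollback request.
    fn wake_up<L: LockManager>(self, lock_mgr: &L) {
        if !self.hashes.is_empty() {
            lock_mgr.wake_up(self.hashes, self.pessimistic);
        }
    }
}
```

Under this scheme, a `pessimistic_rollback` that finds no lock produces an empty `ReleasedLocks` and wakes nobody, which avoids the one-by-one wake-up cascade described above.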
Related changes
Check List
Tests
I think the existing tests are enough, and I benched it using sysbench with the workload below:
master:
This PR:
Release note
Fix the issue that needless wake-ups result in useless retries and reduced performance under heavy-contention workloads.