txn: only wake up waiters when locks are indeed released (#7379) #7549
cherry-pick #7379 to release-3.0
Signed-off-by: youjiali1995 zlwgx1023@gmail.com
What problem does this PR solve?
Problem Summary:
TiKV wakes up waiters whenever it receives a request that may release locks, e.g., `pessimistic_rollback`, `rollback`, or `commit`. If a request doesn't actually release any lock (typically because the lock doesn't exist), there is no need to wake up waiters.

In TiDB, when a pessimistic DML meets a write conflict, it uses `pessimistic_rollback` to clean up all the locks it needs to acquire in this DML and then retries the DML. If a transaction is woken up while other transactions are still waiting for the lock, those transactions get woken up by `pessimistic_rollback` one by one. This dramatically hurts performance and causes useless retries.
What is changed and how it works?
What's Changed:
Only wake up waiters when locks are indeed released, plus a small refactor.
Related changes
Check List
Tests
I think existing tests are enough and I benched it using sysbench with the workload below:
master:
This PR:
Release note
Fix the issue that needless wake-ups result in useless retries and reduced performance under heavy-contention workloads.