txn: Do constraint check when handling repeated acquire_pessimistic_lock request #14037
Conversation
Signed-off-by: MyonKeminta <MyonKeminta@users.noreply.github.com>
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
/release
if let Some((write, commit_ts)) = write {
    // Here `get_write_with_commit_ts` returns only the latest PUT if it exists and
    // is not deleted. It's still ok to pass it into `check_data_constraint`.
    if locked_with_conflict_ts.is_none() {
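To make the gating under discussion concrete, here is a hedged, simplified model of it. This is not the actual TiKV code: `check_data_constraint` in TiKV takes a snapshot and a `Write` record, while the sketch below reduces it to booleans purely to illustrate when the check is skipped.

```rust
// Hypothetical, simplified model of the constraint-check gating shown in the
// diff above. Names mirror the TiKV code, but the signature is invented for
// illustration.

#[derive(Debug, PartialEq)]
enum CheckResult {
    Ok,
    AlreadyExist,
}

// `should_not_exist` corresponds to an Insert's constraint. A `Some`
// `locked_with_conflict_ts` means the lock was acquired despite a newer
// committed version; the caller (TiDB) is then expected to retry the
// statement using the value carried in the response.
fn check_data_constraint(
    should_not_exist: bool,
    write_exists: bool,
    locked_with_conflict_ts: Option<u64>,
) -> CheckResult {
    // The check is skipped for locked-with-conflict results: the response
    // carries the existing value, so the retried statement can detect the
    // duplicate key itself.
    if locked_with_conflict_ts.is_none() && should_not_exist && write_exists {
        CheckResult::AlreadyExist
    } else {
        CheckResult::Ok
    }
}

fn main() {
    // Plain insert on an existing key fails the constraint check.
    assert_eq!(
        check_data_constraint(true, true, None),
        CheckResult::AlreadyExist
    );
    // Locked-with-conflict defers the check to the statement retry.
    assert_eq!(check_data_constraint(true, true, Some(41)), CheckResult::Ok);
    println!("ok");
}
```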
Can you add a comment about why `locked_with_conflict_ts` cases can be skipped? I cannot think of any problem, but I also want to hear your thinking about it.
Would it be a problem for this case:
1. Raise an `Insert` pessimistic lock
2. Locking is successful with conflict and the constraint check is not done
3. Response of 2 is lost, the client retries
4. The `locked_with_conflict_ts` is `Some` and the constraint check is skipped unexpectedly
It acquires the lock and lets the client (TiDB) retry the statement in this case. TiDB will find the key already exists when it retries the statement. This is part of avoiding the key being locked by another transaction while the current transaction retries the statement, which would make the retry a waste.
But actually I want to adjust this behavior later: when `should_not_exist` is set, the key exists, and the latest version is newer than `for_update_ts`, return a write conflict error even if `allow_lock_with_conflict` is set. I think in most cases, when the statement retries, it's likely that it still wants to insert the same key, and it should still fail.
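The behavior adjustment proposed above can be sketched as a small decision function. This is a hypothetical outline of the proposal, not code from this PR; the parameter list is flattened for illustration.

```rust
// Sketch of the proposed future behavior: with `should_not_exist` set, an
// existing key whose latest version is newer than `for_update_ts` would fail
// with a write conflict even when `allow_lock_with_conflict` is set, since a
// statement retry would most likely try to insert the same key and fail anyway.

#[derive(Debug, PartialEq)]
enum Outcome {
    Locked,
    LockedWithConflict,
    WriteConflict,
}

fn proposed_outcome(
    should_not_exist: bool,
    key_exists: bool,
    latest_commit_ts: u64,
    for_update_ts: u64,
    allow_lock_with_conflict: bool,
) -> Outcome {
    let newer = latest_commit_ts > for_update_ts;
    if should_not_exist && key_exists && newer {
        // Proposed change: fail fast instead of deferring to the retry.
        Outcome::WriteConflict
    } else if newer {
        if allow_lock_with_conflict {
            Outcome::LockedWithConflict
        } else {
            Outcome::WriteConflict
        }
    } else {
        Outcome::Locked
    }
}

fn main() {
    // Insert on an existing, newer key: proposed to conflict regardless of
    // `allow_lock_with_conflict`.
    assert_eq!(proposed_outcome(true, true, 10, 5, true), Outcome::WriteConflict);
    // A non-insert lock with a newer version still locks with conflict.
    assert_eq!(proposed_outcome(false, true, 10, 5, true), Outcome::LockedWithConflict);
    // No newer version: lock normally.
    assert_eq!(proposed_outcome(true, false, 3, 5, true), Outcome::Locked);
    println!("ok");
}
```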
> Would it be a problem for this case:
> 1. Raise an `Insert` pessimistic lock
> 2. Locking is successful with conflict and the constraint check is not done
> 3. Response of 2 is lost, the client retries
> 4. The `locked_with_conflict_ts` is `Some` and the constraint check is skipped unexpectedly
In the current implementation (which I'm planning to adjust later), the retried request in the 4th step will still produce a `locked_with_conflict` result, and TiDB will still retry. It will notice the key already exists when retrying the statement and abort the statement then, at which time the lock will be rolled back via `pessimistic_rollback`.
Got it. So the correctness depends on the returned pessimistic lock result containing the "value exists" information. Maybe we could add this to the comment too, in case we forget to return the required information?
The `Value` field is required in `PessimisticLockKeyResult::LockedWithConflict`. I think it's just fine for locked_with_conflict cases? I'm actually not very sure where you'd prefer to add the comment you mentioned.
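The point about the `Value` field can be illustrated with a simplified stand-in for the result type. The variants and field layout below are illustrative, not TiKV's real definition; only the idea matters: the locked-with-conflict variant must carry the existing value so the client can detect "key already exists" on statement retry.

```rust
// Hypothetical, reduced model of TiKV's `PessimisticLockKeyResult`. The real
// enum has more variants and different fields; this sketch only shows why the
// conflict variant carrying the value preserves correctness.
#[derive(Debug)]
enum PessimisticLockKeyResult {
    // No value was requested or none exists.
    Empty,
    // Normal lock acquisition with the key's value (if any).
    Value(Option<Vec<u8>>),
    // Lock acquired despite a newer committed version; the value is required
    // here so a retried statement can see that the key exists.
    LockedWithConflict {
        value: Option<Vec<u8>>,
        conflict_ts: u64,
    },
}

// What a client (TiDB) could derive from the response when rechecking an
// insert's uniqueness constraint.
fn key_exists(res: &PessimisticLockKeyResult) -> bool {
    match res {
        PessimisticLockKeyResult::Empty => false,
        PessimisticLockKeyResult::Value(v) => v.is_some(),
        PessimisticLockKeyResult::LockedWithConflict { value, .. } => value.is_some(),
    }
}

fn main() {
    let res = PessimisticLockKeyResult::LockedWithConflict {
        value: Some(b"v".to_vec()),
        conflict_ts: 42,
    };
    // The conflict result still exposes existence information.
    assert!(key_exists(&res));
    assert!(!key_exists(&PessimisticLockKeyResult::Empty));
    println!("ok");
}
```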
Ok, it's fine.
Signed-off-by: MyonKeminta <MyonKeminta@users.noreply.github.com>
LGTM
/merge
@MyonKeminta: It seems you want to merge this PR, I will help you trigger all the tests: /run-all-tests You only need to trigger If you have any questions about the PR merge process, please refer to pr process. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
This pull request has been accepted and is ready to merge. Commit hash: 6824bba
@MyonKeminta: Your PR was out of date, I have automatically updated it for you. If the CI test fails, you just re-trigger the test that failed and the bot will merge the PR for you after the CI passes. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
In response to a cherrypick label: new pull request created to branch
close tikv#14038, close pingcap/tidb#40114 Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
In response to a cherrypick label: new pull request created to branch
In response to a cherrypick label: new pull request created to branch
close tikv#14038, close pingcap/tidb#40114 Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
In response to a cherrypick label: new pull request created to branch
…ock request (#14037) (#14049) ref #14037, close #14038, close pingcap/tidb#40114 Fixes the problem that `should_not_exist` is ignored when a repeated acquire_pessimistic_lock request is received. TiKV provides idempotency for these RPC requests, but for acquire_pessimistic_lock, it ignored the possibility that the client may expect a pessimistic_rollback between two acquire_pessimistic_lock requests on the same key. In this case the second request may come from another statement and carry `should_not_exist` that wasn't set in the previously finished pessimistic lock request. If the first request successfully acquired the lock and the pessimistic_rollback failed, TiKV may return a successful response, making the client believe that the key didn't exist before. In some rare cases, this risks causing data inconsistency. Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io> Signed-off-by: MyonKeminta <MyonKeminta@users.noreply.github.com> Co-authored-by: MyonKeminta <9948422+MyonKeminta@users.noreply.github.com> Co-authored-by: MyonKeminta <MyonKeminta@users.noreply.github.com>
…ock request (#14037) (#14050) ref #14037, close #14038, close pingcap/tidb#40114 Fixes the problem that `should_not_exist` is ignored when a repeated acquire_pessimistic_lock request is received. TiKV provides idempotency for these RPC requests, but for acquire_pessimistic_lock, it ignored the possibility that the client may expect a pessimistic_rollback between two acquire_pessimistic_lock requests on the same key. In this case the second request may come from another statement and carry `should_not_exist` that wasn't set in the previously finished pessimistic lock request. If the first request successfully acquired the lock and the pessimistic_rollback failed, TiKV may return a successful response, making the client believe that the key didn't exist before. In some rare cases, this risks causing data inconsistency. Signed-off-by: MyonKeminta <MyonKeminta@users.noreply.github.com> Co-authored-by: MyonKeminta <MyonKeminta@users.noreply.github.com>
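The bug described in the commit messages above can be modeled in a few lines. This is a hypothetical toy model, not the TiKV implementation: it collapses MVCC, timestamps, and lock tables into two hash maps purely to show why a repeated acquire_pessimistic_lock must still run the constraint check.

```rust
// Toy model of the fix: a repeated acquire_pessimistic_lock that finds a lock
// already present must still honor `should_not_exist`, because the retry may
// come from a *different* statement whose earlier pessimistic_rollback was
// lost. All types here are invented for illustration.
use std::collections::HashMap;

struct Store {
    committed: HashMap<Vec<u8>, Vec<u8>>, // latest committed values
    locks: HashMap<Vec<u8>, u64>,         // key -> lock's start_ts
}

#[derive(Debug, PartialEq)]
enum LockResult {
    Locked,
    AlreadyExist,
}

fn acquire_pessimistic_lock(
    store: &mut Store,
    key: &[u8],
    start_ts: u64,
    should_not_exist: bool,
) -> LockResult {
    // Before the fix, a request that found its own lock returned early and
    // skipped this check; with the fix, the check runs on repeated requests
    // too, so a later Insert on the same key cannot be fooled.
    if should_not_exist && store.committed.contains_key(key) {
        return LockResult::AlreadyExist;
    }
    store.locks.insert(key.to_vec(), start_ts);
    LockResult::Locked
}

fn main() {
    let mut store = Store {
        committed: HashMap::new(),
        locks: HashMap::new(),
    };
    store.committed.insert(b"k".to_vec(), b"v".to_vec());

    // First request (no uniqueness constraint) succeeds.
    assert_eq!(
        acquire_pessimistic_lock(&mut store, b"k", 10, false),
        LockResult::Locked
    );
    // A repeated request from an Insert statement must still see the
    // existing value and fail the constraint check.
    assert_eq!(
        acquire_pessimistic_lock(&mut store, b"k", 10, true),
        LockResult::AlreadyExist
    );
    println!("ok");
}
```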
What is changed and how it works?
Issue Number: Close #14038
Close pingcap/tidb#40114
What's Changed:
Related changes
- pingcap/docs / pingcap/docs-cn

Check List
Tests
Side effects
Release note