Fix/issue 684 #820
base: master
Conversation
@Phoenix500526 Convert your pr to draft since CI failed
@Phoenix500526 You've modified the workflows. Please don't forget to update the .mergify.yml.
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

```
@@           Coverage Diff            @@
##           master     #820    +/-   ##
==========================================
+ Coverage   75.55%   75.62%   +0.07%
==========================================
  Files         180      185       +5
  Lines       26938    27761     +823
  Branches    26938    27761     +823
==========================================
+ Hits        20353    20995     +642
- Misses       5366     5469     +103
- Partials     1219     1297      +78
==========================================
```
```rust
};
let mut lease_client = self.client.lease_client.clone();
let lease_id = self.lease_id;
self.keep_alive = Some(tokio::spawn(async move {
```
Should we notify the user if the keep-alive task exits before unlock?
It seems we can do nothing but panic. Do you have any better solutions?
On second thought, I think it's impossible to make sure that the lock is always held during the critical section; the user must assume the lock is held.
Do we still need to handle this problem if it's impossible to make sure that the lock is always held during the critical section?
I think we could simply print an error message when the task exits.
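As a thread-based sketch of that idea (not the actual tokio implementation; `KeepAliveError` and the channel-based stop signal are invented for illustration), the keep-alive task could report its exit reason instead of vanishing silently:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Stand-in for whatever error the real lease client returns.
#[derive(Debug)]
struct KeepAliveError(String);

// Spawn a keep-alive loop that reports its exit reason; `stop` closes
// when the guard is dropped (i.e. on unlock).
fn spawn_keep_alive(stop: mpsc::Receiver<()>) -> thread::JoinHandle<Result<(), KeepAliveError>> {
    thread::spawn(move || loop {
        match stop.recv_timeout(Duration::from_millis(10)) {
            // Unlock happened: exit cleanly.
            Ok(()) | Err(mpsc::RecvTimeoutError::Disconnected) => return Ok(()),
            Err(mpsc::RecvTimeoutError::Timeout) => {
                // A real implementation would renew the lease here and, on
                // failure, log the error before returning Err(..), e.g.:
                // eprintln!("keep-alive task exited: {err:?}");
            }
        }
    })
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let handle = spawn_keep_alive(rx);
    drop(tx); // simulate unlock: the task sees the closed channel and exits
    assert!(handle.join().unwrap().is_ok());
}
```

The point is only that the task's `JoinHandle` carries a `Result` which the unlocking path can inspect or log, rather than the task exiting unnoticed.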
We may consider implementing this #820 (comment)
Currently, a user will get an error when invoking the `lock` method on an `Xutex` whose keep-alive task has exited.
I suggest using the following API to avoid confusion about how to use the lock:

```rust
let xutex = Xutex::new(..).await?;
xutex.put(..).await?;
xutex.range(..).await?;
xutex.delete_range(..).await?;
xutex.txn(..).await?;
let guard = xutex.lock_unsafe().await?;
// perform some unsafe operation that may break consistency
// ...
drop(guard);
```

We could directly implement the safe operations `put`/`range`/`delete_range`/`txn` for the type `Xutex`, and also implement a `lock_unsafe` method to keep it compatible with the behavior of the etcd client.
What does "compatible with the behavior of the etcd client" mean? Just renaming `lock` as `lock_unsafe`? In addition, I don't see why implementing `put`/`range`/`delete_range` is less confusing than not implementing them. The `Xutex` is a lock wrapper, not a `KvClient` wrapper. Therefore, if a user needs to do something while holding a lock, they should use a `KvClient` to initiate the request instead of using `Xutex`. Considering that, I think providing `txn_with_locked_key` is enough. All it does is generate a txn request that compares whether the locked key exists.
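The `txn_with_locked_key` idea could be sketched with simplified stand-in types (the real request types in xline-client differ; `Compare`, `TxnRequest`, and the string-based ops here are hypothetical):

```rust
// The point is only that `txn_with_locked_key` guards every bundled
// operation with "the locked key still exists", so the server rejects
// the ops if the lock was lost.
#[derive(Debug, PartialEq)]
enum Compare {
    KeyExists(String),
}

#[derive(Debug)]
struct TxnRequest {
    when: Vec<Compare>,
    success: Vec<String>,
}

fn txn_with_locked_key(locked_key: &str, ops: Vec<String>) -> TxnRequest {
    TxnRequest {
        // Server-side: execute `ops` only if the lock key is still present.
        when: vec![Compare::KeyExists(locked_key.to_string())],
        success: ops,
    }
}

fn main() {
    let txn = txn_with_locked_key("/locks/job/lease-1", vec!["put k v".to_string()]);
    assert_eq!(txn.when, vec![Compare::KeyExists("/locks/job/lease-1".to_string())]);
    assert_eq!(txn.success.len(), 1);
}
```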
@Phoenix500526 Convert your pr to draft since CI failed
@Phoenix500526 Your PR is in conflict and cannot be merged.
@Phoenix500526 Your PR is in conflict and cannot be merged.
A mutex and its guard may not be the best abstraction here. If any error happens, such as a lease-renewal failure, the lock release is a good point to report it. In the mutex-guard abstraction, the lock release happens in the drop function, so the caller has no way to handle the error.
```rust
/// An RAII implementation of a “scoped lock” of an `Xutex`
#[derive(Default, Debug)]
pub struct XutexGuard {
```
Usually mutex guards have a lifetime bound to make sure the mutex itself lives longer than the guards. Shall we follow that?
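For reference, the lifetime-bound pattern mentioned here looks like the following minimal sketch, in the style of `std::sync::MutexGuard<'a, T>` (a stand-in `Xutex`, not the real type):

```rust
// The borrow checker guarantees the `Xutex` outlives every guard
// borrowed from it.
struct Xutex {
    key: String,
}

struct XutexGuard<'a> {
    xutex: &'a Xutex,
}

impl Xutex {
    fn lock(&self) -> XutexGuard<'_> {
        XutexGuard { xutex: self }
    }
}

fn main() {
    let xutex = Xutex { key: "/locks/a".to_string() };
    let guard = xutex.lock();
    assert_eq!(guard.xutex.key, "/locks/a");
    // Dropping `xutex` while `guard` is alive would be a compile error.
}
```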
The `async_dropper` requires the `Default` trait. If we introduce a lifetime bound for `XutexGuard`, `async_dropper` may not work properly.
How about we allow the user to do a manual unlock, and the unlock could return some information to the user? For example:

```rust
enum UnlockResult {
    Ok,
    // The lock may have been unlocked by the server
    Unsafe(Err),
}

let guard = xutex.lock_unsafe().await?;
let result: UnlockResult = guard.unlock().await?;
match result {
    // handle the unlock result
    // ...
}
```
Manually unlocking may not be good practice, since the likelihood of a user forgetting to unlock is greater than the likelihood of a panic occurring.

But if you don't provide an RAII implementation for the lock, users may be bothered by the possibility of forgetting to free a lock.
No, they won't, if you put the operations dealing with the locked data in a closure and release the lock when leaving the closure. For example:

```rust
let return_value = a_xutex.map_lock(|xutex_guard| {
    // deal with the guard
});
```

The return value can tell whether there was any lock-related issue during the closure. Additionally, if you follow this approach, the lifetime bound on the guard is unnecessary, as there's only one way to get the guard and drop it.
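The `map_lock` proposal could be sketched with a local stand-in type (synchronous and single-threaded here purely for illustration; the real API would be async and talk to Xline, and would surface unlock failures in the returned `Result`):

```rust
use std::cell::Cell;

// Local stand-in for the real `Xutex`: `map_lock` acquires the lock, runs
// the closure, releases the lock, and returns a result that can surface
// any unlock error to the caller.
struct Xutex {
    locked: Cell<bool>,
}

#[derive(Debug)]
struct LockError(&'static str);

impl Xutex {
    fn new() -> Self {
        Xutex { locked: Cell::new(false) }
    }

    fn map_lock<T>(&self, f: impl FnOnce() -> T) -> Result<T, LockError> {
        self.locked.set(true); // lock
        let value = f(); // the critical section runs with the lock held
        self.locked.set(false); // unlock; a real impl would report failures here
        Ok(value)
    }
}

fn main() {
    let xutex = Xutex::new();
    let result = xutex.map_lock(|| 41 + 1).unwrap();
    assert_eq!(result, 42);
    assert!(!xutex.locked.get()); // the lock is released on return
}
```

With this shape there is exactly one way to obtain and drop the guard, which is why the lifetime bound becomes unnecessary.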
@Phoenix500526 Convert your pr to draft since CI failed
@Phoenix500526 Your PR is in conflict and cannot be merged.
@Phoenix500526 Convert your pr to draft since CI failed
We could guarantee lock safety by coupling the lock key to every update sent to Xline, with the Xline server verifying the validity of the key. Please refer to https://jepsen.io/analyses/etcd-3.4.3. On the client side, we could attach the KV operation methods to the lock guard to prevent users from using the lock for other purposes.
@Mergifyio rebase
Closes: xline-kv#664 Signed-off-by: Phoeniix Zhao <Phoenix500526@163.com>
Closes: 664,684 Signed-off-by: Phoeniix Zhao <Phoenix500526@163.com>
Signed-off-by: Phoeniix Zhao <Phoenix500526@163.com>
✅ Branch has been successfully rebased
LGTM.
Please briefly answer these questions:

What problem are you trying to solve? (Or if there's no problem, what's the motivation for this change?)

Close the issues [Bug]: The xlinectl won't renew the lease of the lock key #664 and [Refactor]: Add a session structure to renew lock lease automatically #684.

What changes does this pull request make?

- Implement a session structure to auto-renew the lock lease.
- Implement an `Xutex` (meaning Xline Mutex) to describe a lock instance.
- Provide an RAII implementation, `XutexGuard`, for `Xutex`.
- Remove the LockRequest and UnlockRequest in xline-client.
- Remove some useless test cases, like `lock_should_timeout_when_ttl_is_set`. Actually, whether the ttl is set or not, the lock in etcd won't time out; the ttl of a lock is only used for liveness checking. FYI: what is the ttl used for in a session? etcd-io/etcd#6736
- Remove the validation_lock_client.rs.

Are there any non-obvious implications of these changes? (Does it break compatibility with previous versions, etc.?)

No.