core: added mutex lock #23
Conversation
It seems that tests are not enabled in the CI. I ran them locally.
I have re-enabled the CI tests.
invenio_cache/lock.py
if self.acquired:
    success = self._cache.delete(self.lock_id)
If the lock was not acquired for some reason, we might want to return True or to fail:
if self.acquired:
    success = self._cache.delete(self.lock_id)
else:
    success = True
Otherwise, it might return False and you won't know what happened.
WDYT?
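For illustration, with the current code a caller cannot tell a failed cache delete from a lock that was simply never acquired. A quick sketch (how CachedMutex is constructed here is hypothetical, not taken from this PR):

# Sketch only: constructor call is assumed.
lock = CachedMutex("my-lock-id")
# ... acquire() may or may not have been called earlier ...
if not lock.release():
    # Cache delete failed? Or was the lock simply never acquired?
    pass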
Re-iterating on this: I would remove all the if self.acquired checks. You need to be sure to pass the same CachedMutex instance to release. I would rely exclusively on the existence of the cache key instead.

What is the use case? A multiprocessing context?

My worry is the following: if improperly used, it might happen that I acquire the lock in one process (one web node) and try to release it in another process (another web node) later on. This will fail unless the same instance of CachedMutex is shared.
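A minimal sketch of what relying only on the cache key could look like; self._cache, self.lock_id and delete() are taken from the diff above, while get() and the overall class shape are assumptions:

# Sketch only, not the actual invenio_cache implementation.
class CachedMutex:
    def __init__(self, lock_id, cache):
        self.lock_id = lock_id
        self._cache = cache

    def exists(self):
        # The cache key itself is the source of truth; no self.acquired flag.
        return self._cache.get(self.lock_id) is not None

    def release(self):
        # Releasing a lock that no longer exists is treated as a successful no-op.
        if not self.exists():
            return True
        return self._cache.delete(self.lock_id)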
I agree with that.
However, the lock might be released by any thread right now.
E.g.
# Thread 1
lock = Mutex("id1")
lock.acquire()

# Thread 2 (never acquired the lock)
lock = Mutex("id1")
lock.release()  # releases Thread 1's lock
We could "sign" these locks with a unique ID (e.g. set the cached value to a UUID) and only allow the release/renewal if the thread passes the same UUID. We briefly talked about it but did not elaborate on it.
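A rough sketch of that signing idea; apart from the lock_id/acquire/release names used above, the class and the cache add()/get() semantics are assumptions (add() is assumed to store the value only if the key does not exist yet, as in cachelib-style clients):

import uuid

class SignedCachedMutex:
    # Sketch only: the lock value is a per-owner token, so only the owner
    # (the caller holding the same token) can release or renew the lock.
    def __init__(self, lock_id, cache, timeout=60):
        self.lock_id = lock_id
        self._cache = cache
        self._timeout = timeout
        self._token = str(uuid.uuid4())

    def acquire(self):
        # Succeeds only if no one else currently holds the key.
        return self._cache.add(self.lock_id, self._token, timeout=self._timeout)

    def release(self):
        # Reject releases from callers that do not hold the matching token.
        if self._cache.get(self.lock_id) != self._token:
            return False
        return self._cache.delete(self.lock_id)

Note that the get-then-delete in release() is not atomic; with a Redis backend this check would normally be done atomically (e.g. via a Lua script), but that is beyond the scope of this sketch.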
closes inveniosoftware/invenio-users-resources#102