In order to fix #95 and #27 we need some locking mechanism for a given key. This way we can avoid race conditions and also queue up queries to the database when we know a query for the same key is already in flight.
After some reading, optimistic locking looks like a good approach for this. We can let the client deal with the exception in case of a race condition. With this solution we will still need an ad hoc one for solving #27.
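The "let the client deal with the exception" contract can be sketched against a toy in-memory store. All names here (`TokenStore`, `CasError`, `gets`/`cas`) are illustrative, not the aiocache API:

```python
class CasError(Exception):
    """Raised when the key changed between the read and the write."""

class TokenStore:
    """Toy store where every key carries a version token (optimistic lock)."""

    def __init__(self):
        self._data = {}  # key -> (value, version)

    def gets(self, key):
        # Return the current value together with its version token.
        return self._data.get(key, (None, 0))

    def cas(self, key, value, token):
        # The write only succeeds if the token matches the stored version.
        _, version = self._data.get(key, (None, 0))
        if version != token:
            raise CasError(key)
        self._data[key] = (value, version + 1)

def incr_with_retry(store, key):
    # The client side of the contract: on a race, re-read and retry.
    while True:
        value, token = store.gets(key)
        try:
            store.cas(key, (value or 0) + 1, token)
            return
        except CasError:
            continue  # lost the race: another writer bumped the version

store = TokenStore()
incr_with_retry(store, "hits")
incr_with_retry(store, "hits")
```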
Solutions for each backend:
memcached
CAS token as suggested in #95 looks good. aiomcache does not seem to support it, so we will need to implement it there or send the command directly -> PR open: aio-libs/aiomcache#33
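If we end up sending the command directly, the memcached text protocol part is small: `gets` returns a per-item cas unique, and `cas` answers `STORED`, `EXISTS` (item changed since the `gets`) or `NOT_FOUND`. A sketch that only builds the command bytes (helper names are made up, and the response parsing is left out):

```python
def gets_command(key: bytes) -> bytes:
    # "gets <key>\r\n" -> server replies with "VALUE <key> <flags> <bytes> <cas unique>"
    return b"gets " + key + b"\r\n"

def cas_command(key: bytes, value: bytes, cas_token: int,
                flags: int = 0, exptime: int = 0) -> bytes:
    # "cas <key> <flags> <exptime> <bytes> <cas unique>\r\n<data>\r\n"
    header = b"cas %b %d %d %d %d\r\n" % (key, flags, exptime, len(value), cas_token)
    return header + value + b"\r\n"
```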
redis
Use WATCH to reproduce the CAS token: https://redis.io/topics/transactions#optimistic-locking-using-check-and-set.
Need to find a way to reuse the same connection while doing this: the granularity is per connection, so it won't work if another connection is used.
memory
EDIT: Ignore the previous idea; it's easier if we use the distributed lock that was used to implement Redlock. It is straightforward to use.
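For the memory backend, a plain per-key asyncio.Lock already gives the queue-up behavior wanted in #27: the first caller computes the value, later callers for the same key wait and reuse it. A sketch with illustrative names, not the aiocache API:

```python
import asyncio
from collections import defaultdict

_locks = defaultdict(asyncio.Lock)  # one lock per key
_cache = {}

async def get_or_compute(key, compute):
    # Everyone racing on `key` serializes here; only the first caller pays
    # the cost of `compute`, the rest return the cached value.
    async with _locks[key]:
        if key not in _cache:
            _cache[key] = await compute()
        return _cache[key]

async def main():
    calls = 0

    async def expensive():
        nonlocal calls
        calls += 1
        await asyncio.sleep(0.01)  # simulate a slow database query
        return "value"

    # Five concurrent callers, but `expensive` should run exactly once.
    results = await asyncio.gather(
        *(get_or_compute("k", expensive) for _ in range(5))
    )
    return calls, results

calls, results = asyncio.run(main())
```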