Distributed reader-writer lock for Python using Redis as server
- Reader-writer lock (can have multiple readers or one exclusive writer)
- Stale locks collected (run as a separate process: python3 -m redisrwlock)
- Deadlock detection
Note: Deadlock detection and stale-lock collection are done on the
client side, which can cause excessive I/O with the redis server. Tune
retry_interval and run the stale-lock collection at a frequency
appropriate for your purpose.
- python 3.5.2
- redis-py 2.10.5
- redis 3.2.6
- [test] Coverage.py 4.2
```
pip install redisrwlock
```
Try lock with timeout=0
With timeout=0, RwlockClient.lock acts as a so-called try_lock.
```python
from redisrwlock import Rwlock, RwlockClient

client = RwlockClient()
rwlock = client.lock('N1', Rwlock.READ, timeout=0)
if rwlock.status == Rwlock.OK:
    # Processing of resource named 'N1' with READ lock
    # ...
    client.unlock(rwlock)
elif rwlock.status == Rwlock.FAIL:
    # Retry locking or quit
    pass
```
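The "retry locking or quit" branch above is often a bounded polling loop. Below is a minimal sketch of that pattern; `FakeClient` and `Result` are hypothetical stand-ins (not part of redisrwlock) that succeed on the third attempt, so the snippet is self-contained and the retry path is exercised.

```python
import time

OK, FAIL = 'OK', 'FAIL'  # stand-ins for Rwlock.OK / Rwlock.FAIL

class Result:
    def __init__(self, status):
        self.status = status

class FakeClient:
    """Deterministic stub for RwlockClient: lock succeeds on the 3rd call."""
    def __init__(self, succeed_after=3):
        self.calls = 0
        self.succeed_after = succeed_after
    def lock(self, name, timeout=0):
        self.calls += 1
        return Result(OK if self.calls >= self.succeed_after else FAIL)
    def unlock(self, rwlock):
        pass

def try_lock_with_retries(client, name, attempts=5, delay=0.0):
    """Poll a try-lock (timeout=0) a bounded number of times before quitting."""
    for _ in range(attempts):
        rwlock = client.lock(name, timeout=0)
        if rwlock.status == OK:
            return rwlock
        time.sleep(delay)  # back off, then retry the non-blocking lock
    return None

client = FakeClient()
rwlock = try_lock_with_retries(client, 'N1')
```

With the real library, `client.lock('N1', Rwlock.READ, timeout=0)` would replace the stubbed call.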
Waiting until lock success or deadlock
With timeout > 0, RwlockClient.lock waits until the lock succeeds, or until a deadlock is detected and the caller is chosen as the victim.
```python
from redisrwlock import Rwlock, RwlockClient

client = RwlockClient()
rwlock = client.lock('N1', Rwlock.READ, timeout=Rwlock.FOREVER)
if rwlock.status == Rwlock.OK:
    # Processing of resource named 'N1' with READ lock
    # ...
    client.unlock(rwlock)
elif rwlock.status == Rwlock.DEADLOCK:
    # 1. Unlock if holding any other locks
    # 2. Retry locking or quit
    pass
```
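The two recovery steps in the DEADLOCK branch (release other held locks, then retry) can be sketched as a loop. `FakeClient` below is a hypothetical deterministic stub, not part of redisrwlock: it reports a deadlock on the first attempt to lock 'N2', so the victim's release-and-retry path is exercised without a redis server.

```python
import time

OK, DEADLOCK = 'OK', 'DEADLOCK'  # stand-ins for Rwlock.OK / Rwlock.DEADLOCK

class Result:
    def __init__(self, status, name):
        self.status, self.name = status, name

class FakeClient:
    """Stub for RwlockClient: first attempt on 'N2' reports DEADLOCK."""
    def __init__(self):
        self.attempts = {}
        self.held = set()
    def lock(self, name, timeout=None):
        n = self.attempts.get(name, 0)
        self.attempts[name] = n + 1
        if name == 'N2' and n == 0:
            return Result(DEADLOCK, name)
        self.held.add(name)
        return Result(OK, name)
    def unlock(self, rwlock):
        self.held.discard(rwlock.name)

def lock_both(client, name1, name2, retries=3, backoff=0.0):
    """Acquire two locks; if chosen as deadlock victim on the second,
    release the first (step 1) and retry (step 2)."""
    for _ in range(retries):
        r1 = client.lock(name1, timeout=None)
        if r1.status != OK:
            continue
        r2 = client.lock(name2, timeout=None)
        if r2.status == OK:
            return r1, r2
        client.unlock(r1)   # step 1: unlock the other held lock
        time.sleep(backoff) # step 2: back off, then retry locking
    return None

client = FakeClient()
result = lock_both(client, 'N1', 'N2')
```

Releasing everything before retrying matters: a victim that keeps its other locks while retrying can immediately recreate the same cycle.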
Removing stale locks
When a client exits without unlocking, the redis keys for the client's locks remain in the server and block other clients from locking successfully. Running redisrwlock from the command line removes such stale locks waiting in the server.
```
python3 -m redisrwlock
python3 -m redisrwlock --repeat --interval 10
python3 -m redisrwlock --server localhost --port 7777
```
There are several options for command line execution:
```
-r, --repeat    repeat gc periodically; interval is given by -i or --interval
-i, --interval  interval of the periodic gc in seconds (default 5)
-s, --server    redis-server host to connect (default localhost)
-p, --port      redis-server port to connect (default 6379)
```
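One way to keep the collector running alongside an application is to spawn it as a child process with the flags above. A minimal sketch, where `gc_command` is a hypothetical helper (not part of redisrwlock) that assembles the documented command line:

```python
import subprocess
import sys

def gc_command(interval=10, server="localhost", port=6379):
    """Assemble the documented command line for the periodic stale-lock gc."""
    return [sys.executable, "-m", "redisrwlock",
            "--repeat", "--interval", str(interval),
            "--server", server, "--port", str(port)]

# Launch in the background (requires redisrwlock installed and a
# reachable redis-server), e.g.:
#   gc_process = subprocess.Popen(gc_command())
cmd = gc_command(interval=5, port=7777)
```

Terminating the child when the application exits (e.g. `gc_process.terminate()`) is left to the caller.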
Running unittest in the test directory:
```
cd test
python3 -m unittest -q
```
or in project top directory:
```
python3 -m unittest discover test -q
```
The examples below assume you run unittest in the project top directory.
```
coverage erase
coverage run -a -m unittest discover test -q
coverage html
```
The simple coverage run above will report lower coverage than expected because the tests use subprocesses. Code run by a subprocess is not included in the report by default.
This needs some preparation:

Edit sitecustomize.py (under the Python installation's site-packages directory) and add 2 lines:
```python
import coverage
coverage.process_startup()
```
Edit .coveragerc (the default name of coverage.py's config file):
```
[run]
branch = True
# To avoid occasional "JSONDecodeError: extra data"
parallel = True

[html]
directory = htmlcov
```
Then run coverage with the environment variable set:
```
coverage erase
COVERAGE_PROCESS_START=.coveragerc coverage run -m unittest discover test -q
coverage combine && coverage html
```
- TODO: high availability! redis sentinel or replication?