Thundering herd avoidance doesn't wait the full herd_avoidance_timeout
#6
I was getting spurious errors in tests as a result of a failed lock call. I don't know how to write a test that can simulate this; presumably you need multiple processes running at once.

Based on the link here, and some others, I am assuming that we are using either Redis or Memcached, and if we aren't, then we are running tests and there is a single process anyway, so nothing else will try to run the task while we hold the lock.

On that basis, this seems to work:

import time
from contextlib import contextmanager

from django.core.cache import cache  # assumes the Django cache framework


@contextmanager
def acquire_lock(lock_name, timeout=900):
    """
    A contextmanager to wait until an exclusive lock is available,
    hold the lock and then release it when the code under context
    is complete.

    Attempt to use lock and unlock, which will work if the cache is Redis,
    but fall back to a memcached-compliant add/delete approach.

    See:
    - http://loose-bits.com/2010/10/distributed-task-locking-in-celery.html
    - http://celery.readthedocs.org/en/latest/tutorials/task-cookbook.html#ensuring-a-task-is-only-executed-one-at-a-time
    """
    try:
        # Raises AttributeError if the cache backend doesn't expose a
        # raw Redis client (e.g. when running tests with LocMemCache).
        redis = cache.client.client
        have_lock = False
        lock = redis.lock(lock_name, timeout=timeout)
        try:
            have_lock = lock.acquire(blocking=True)
            if have_lock:
                yield
        finally:
            if have_lock:
                lock.release()
    except AttributeError:
        # Not Redis: fall back to memcached-style add/delete locking,
        # relying on add() only succeeding when the key is absent.
        have_lock = False
        try:
            while not have_lock:
                have_lock = cache.add(lock_name, 'locked', timeout)
                if not have_lock:
                    time.sleep(0.05)  # back off briefly instead of busy-waiting
            if have_lock:
                yield
        finally:
            if have_lock:
                cache.delete(lock_name)
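For illustration, here is a minimal, self-contained sketch of the memcached-style add/delete fallback described above, using an in-memory stand-in for the cache so it can run anywhere. The `FakeCache` class and the lock name are hypothetical, purely for demonstration; a real deployment would use the Django cache backend instead.

```python
import time
from contextlib import contextmanager


class FakeCache:
    """In-memory stand-in for a cache supporting atomic add/delete."""

    def __init__(self):
        self._store = {}

    def add(self, key, value, timeout):
        # add() only succeeds if the key is not already present,
        # which is what makes it usable as a lock primitive.
        if key in self._store:
            return False
        self._store[key] = value
        return True

    def delete(self, key):
        self._store.pop(key, None)


cache = FakeCache()


@contextmanager
def acquire_lock(lock_name, timeout=900):
    """Spin until cache.add() succeeds, hold the lock for the duration
    of the context, and delete the key on exit."""
    have_lock = False
    try:
        while not have_lock:
            have_lock = cache.add(lock_name, 'locked', timeout)
            if not have_lock:
                time.sleep(0.05)  # back off briefly instead of busy-waiting
        yield
    finally:
        if have_lock:
            cache.delete(lock_name)


results = []
with acquire_lock('my-task-lock'):
    # While the lock is held, a second add() on the same key fails,
    # so a concurrent worker would keep waiting.
    results.append(cache.add('my-task-lock', 'locked', 900))  # False
print(results)  # [False]
print(cache.add('my-task-lock', 'x', 900))  # True: lock released on exit
```

The add/delete approach is weaker than a real Redis lock (there is no owner token, so a crashed worker's lock only clears when the cache timeout expires), which is why the Redis code path is preferred when available.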
Thanks, rhunwicks! This looks like a solid implementation for Redis and Memcached. I'm not sure how to test this, either. Hrm. Async testing is hard.
We've been using it in production for almost a year. Do you want a PR on it without a test? I think it's probably more useful than the existing implementation.
At this point, that's probably the pragmatic option, yup.
The acquire_lock implementation is brain-dead. Make it do what it looks like it should be doing. Maybe take a look at: http://loose-bits.com/2010/10/distributed-task-locking-in-celery.html