"Reentrant" version that allows a celery.bin.celery.main worker to run multiple times
#866
Conversation
Allows the same Python process to bring up a new Worker after a rescued shutdown (SystemExit). See: https://botbot.me/freenode/celery/2018-04-16/?msg=99046233&page=2
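The intent above can be sketched as a loop that rescues the SystemExit raised on worker shutdown and starts the entrypoint again in the same process. This is a minimal illustration with a hypothetical helper name (`run_worker_repeatedly`), not code from the patch; the real entrypoint would be celery.bin.celery.main.

```python
def run_worker_repeatedly(entrypoint, times=2):
    """Run a worker entrypoint several times in the same process.

    A clean worker shutdown raises SystemExit; rescue it so the
    process survives and the next run can start.
    """
    for _ in range(times):
        try:
            entrypoint()
        except SystemExit as exc:
            # Swallow the shutdown signal and loop again.
            print("worker exited with code", exc.code)
```

Without the reentrancy fixes described in this PR, the second iteration fails because the Hub's poller is gone after the first shutdown.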
Codecov Report
@@            Coverage Diff             @@
##           master     #866      +/-   ##
==========================================
+ Coverage   86.26%   86.27%   +0.01%
==========================================
  Files          63       63
  Lines        6485     6491       +6
  Branches      768      769       +1
==========================================
+ Hits         5594     5600       +6
  Misses        812      812
  Partials       79       79
Thanks for the patch. Is it possible to improve the test coverage?
To raise the coverage, I can think of a case: closing the Redis poller and then trying to reuse it again. I'll take a look at which file it would fit in.
@auvipy I am trying to bake a test case for "Redis got closed then reused again", but could not find a way: the actual tests never set … I could use some help here, if possible.
Looks like Travis failed only on Python 3.4 and PyPy, for some unknown reason. Could you please retry the builds there?
@alanjds Failed builds restarted.
LGTM
The current version does not allow celery.bin.celery.main to run, raise a SystemExit, catch it, and then run again. (See: alanjds/celery-serverless@a23f79d#diff-b4826aeb276ca699cf1adda1903fe3eaR55)

To allow it, kombu.async.hub.Hub.poller became a @property that regenerates on access, and the Redis backend tries to recreate it before accessing it, if needed.

Indeed, celery.worker.state.should_{stop,terminate} should be reset after every SystemExit for this to work, but that part is a patch for the Celery repo, not for Kombu ;). (See: alanjds/celery-serverless@a23f79d#diff-b4826aeb276ca699cf1adda1903fe3eaR57)
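The "property that regenerates on access" pattern mentioned above can be sketched as follows. This is a simplified, hypothetical Hub with a stand-in poller object, assuming the real poller is discarded on close and should be lazily recreated on the next access; it is not the actual Kombu implementation.

```python
class Hub:
    """Sketch of an event-loop hub whose poller regenerates on access."""

    def __init__(self):
        self._poller = None

    @property
    def poller(self):
        # Recreate the poller lazily if it was removed on shutdown,
        # so the Hub stays usable after a rescued SystemExit.
        if self._poller is None:
            self._poller = self._create_poller()
        return self._poller

    @poller.setter
    def poller(self, value):
        self._poller = value

    def _create_poller(self):
        # Stand-in for building the real platform poller (epoll/kqueue/...).
        return object()

    def close(self):
        # Shutdown discards the poller instead of leaving a closed one behind.
        self._poller = None
```

With this shape, a second worker run in the same process gets a fresh poller transparently instead of crashing on a closed one.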