Enable automatic reload upon source code changes #2
Comments
This would be a nice feature indeed +1
I support this
Can you manually send a signal to reload a worker?
One solution for this would be to write a script that restarts the workers whenever the source changes. Of course it won't work reliably if you have long monolithic jobs, but in that case you have a bigger problem than reloading your workers ;)
Any update on this?
No one is working on this as far as I know. It would be nice to have this feature though. Django uses watchman to trigger reloads. I'd accept a PR that implements live reload functionality :)
Thanks for the update. I'll see if I find the time to submit a PR. For now I just went with entr plus ack (doesn't pick up new files though): `ack -f | entr -r -s "rq worker -u redis://redis:6379"`
Was going through the code and I couldn't really see why you'd have to reload. Aren't functions "imported" using getattr during execution? See lines 191 to 200 in 549648b.
I tested this real quick by starting a worker whose job initially created a file named "old", and enqueued a job (the "old" file was created). Then I deleted that file, changed the filename in the job code to "new", and enqueued another job without restarting the worker. A "new" file was created. Am I missing something?
This is how I'm currently autoreloading the worker process. It's a management command I called devrqworker, which I run instead of the usual rqworker command.
There's a package called django-rq-wrapper that implements autoreload and starts multiple workers. I didn't test it, but it seems pretty simple and not invasive, since it just adds a new management command called "rqworkers".
@honi Thanks. Works well. PSA: don't use it in production. This eats HELLA memory. Like 100 MB (regular rq worker) vs 300 MB (this custom version).
Correct! In production you should use the default rqworker management command.
Like in #2 (comment), I made a simple test and it worked. Can someone send me an example so I can look at it?
What the worker does is basically just fork and call the job function in a fresh process, so source changes are picked up for newly executed jobs. If anyone has any issues with this, feel free to open a new issue so we can investigate. Closing it for now.
Python caches modules that have already been imported, but the default worker forks a new process for each job it executes, so it is not affected by module caching. If you use SimpleWorker, though, code changes will not be picked up.
This shouldn't be too fancy, but it would be extremely useful: it avoids the mistake of changing some of your job function's code and forgetting to restart your worker.