Provide mechanism to not respawn if worker hasn't ever gotten to ready #18
Comments
Could you elaborate on your use case? Would this apply only the first time the master process is run? In some cases the problem can be fixed without rerunning the master, e.g. failing to parse a configuration file with a typo, or failing to load a missing module. A large enough backoff value will quickly space the respawns far enough apart. Looking at the code I noticed that
I have a cluster master that is spinning up server forks. If these forks never get to `ready`, there's no point in respawning them over and over. Does that make sense? It isn't about the time period between respawns so much as having a threshold beyond which we respawn, but before which we shut down.
I think I understand now. If an error such as that occurs, I usually want to revert to an earlier version of the main program ASAP and let the automatic restart pick it up. Only in the special case where I don't want that (because, for example, env variables need to change) would I provide a way to call .terminate() manually (e.g. via SIGHUP), then start the master again. However, your approach is an equally valid alternative. I'll try to find some time this weekend to give it a try. I'd also welcome a pull request, if you have the time for it :)
Closing with wontfix - the recommended solution is to take advantage of the backoff option to avoid wasting resources, and to kill the process externally if it's not possible to solve the problem by swapping the code (e.g. because the problem is missing environment variables).
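To make the recommended workaround concrete, here is a sketch of how a backoff option spaces out respawns. An exponential policy capped at `backoff` seconds is assumed for illustration; the library's actual policy may differ:

```javascript
// Hypothetical backoff calculation: the delay before the next respawn
// doubles with each consecutive failure, capped at `backoff` seconds.
function respawnDelay(attempt, backoff) {
  return Math.min(Math.pow(2, attempt), backoff);
}

// With backoff = 300, a crash-looping worker quickly settles at one
// respawn every 5 minutes instead of a tight restart loop.
for (let attempt = 0; attempt < 10; attempt++) {
  console.log('attempt', attempt, '-> delay', respawnDelay(attempt, 300), 's');
}
```

This doesn't stop the respawns entirely (the issue's request), but it bounds the resources a never-ready worker can waste until the process is killed externally.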
If I have a worker that is misconfigured (e.g., missing an env setting) and never gets to the `ready` state, it would be nice if the entire cluster could shut down, instead of going into an endless loop trying and failing to start the workers. If the cluster master could be told to listen for a worker to get to the ready state at least once, then we'd know it's probably useful to respawn. If, however, we never get to ready, it's probably a good sign that respawning isn't going to help much. I can see times where you might have workers that you want to keep kicking, so maybe this could be optional, `opts.respawnExpectsReady` or the like.
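The proposed flag could look something like the sketch below. The option name `respawnExpectsReady` comes from this issue, and the `workerState` object is an assumption for illustration; no such option exists in the library:

```javascript
// Hypothetical decision logic for the proposed opts.respawnExpectsReady
// flag. The master would set workerState.everReady = true the first time
// the worker signals readiness, then consult this on every worker exit.
function onWorkerExit(workerState, opts) {
  if (opts.respawnExpectsReady && !workerState.everReady) {
    // The worker died without ever reaching ready: respawning is
    // unlikely to help (e.g. a missing env variable), so shut down.
    return 'shutdown';
  }
  return 'respawn';
}

// Died before ever becoming ready -> stop the cluster.
console.log(onWorkerExit({ everReady: false }, { respawnExpectsReady: true })); // 'shutdown'
// Was ready once, then crashed -> respawning is worthwhile.
console.log(onWorkerExit({ everReady: true }, { respawnExpectsReady: true }));  // 'respawn'
```

With the flag unset, behavior is unchanged, which keeps the "keep kicking" use case working by default.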