
Provide mechanism to not respawn if worker hasn't ever gotten to ready #18

Closed
humphd opened this issue Nov 6, 2014 · 4 comments

humphd commented Nov 6, 2014

If I have a worker that is misconfigured (e.g., missing an env setting) and it never gets to the ready state, it would be nice if the entire cluster could shut down, instead of going into an endless loop trying and failing to start the workers. If the cluster master could be told to listen for a worker reaching the ready state at least once, then we'd know it's probably useful to respawn. If, however, we never get to ready, it's probably a good sign that respawning isn't going to help much.

I can see times when you might have workers that you want to keep kicking, so maybe this could be optional, opts.respawnExpectsReady or the like.
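
A rough sketch of how the proposed option might look, assuming recluster's usual options object (respawnExpectsReady is the hypothetical flag suggested above and does not exist in the library):

```js
var recluster = require('recluster');
var path = require('path');

var cluster = recluster(path.join(__dirname, 'worker.js'), {
    // Hypothetical: only respawn a worker if it reached "ready" at least once.
    respawnExpectsReady: true
});
cluster.run();
```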

spion commented Nov 7, 2014

Could you elaborate more on your use case? Would this apply only the first time the master process is run?

In some cases the problem can be fixed without rerunning the master, e.g. a configuration file that fails to parse because of a typo, or a module that fails to load because it's missing. A large enough backoff value will quickly space the respawns far enough apart.

Looking at the code, I noticed that opt.backoff is undefined by default, which causes all restart timers to fire quickly all the time (every second by default). Perhaps this could be solved by providing saner default backoff and restart values, e.g. 60s? That way, instead of a resource-intensive endless loop, you would get a few fast retries, but recluster would quickly back off to occasional retries every minute, which might be an acceptable option.
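
For reference, a minimal sketch of the backoff-based workaround, assuming recluster's respawn/backoff options are given in seconds (check the README of the version you use for the exact semantics):

```js
var recluster = require('recluster');
var path = require('path');

var cluster = recluster(path.join(__dirname, 'worker.js'), {
    respawn: 1,   // minimum time between respawns: retries start out fast...
    backoff: 60   // ...but back off, up to roughly one respawn per minute
});
cluster.run();
```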

humphd commented Nov 7, 2014

I have a cluster master that is spinning up server forks. If these forks never get to ready, instead of respawning, I'd like the option to kill the server. In essence, I want an option to say, "only respawn if we ever got to ready." I can see how this is not necessarily something you'd want in the default case. However, I'd argue that if a server never gets to ready, it's unlikely to get to ready ever; whereas a crashed server (one that did get to ready, ran, and hit some kind of error) is one that should be restarted.

Does that make sense? It isn't about the time period between respawns so much as having a threshold beyond which we respawn, but before which we shut down.
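
A conceptual sketch of the requested behaviour, written against Node's built-in cluster module rather than recluster's API, assuming the worker signals readiness with process.send('ready'):

```js
var cluster = require('cluster');

if (cluster.isMaster) {
    var everReady = false;

    var fork = function() {
        var worker = cluster.fork();
        worker.on('message', function(msg) {
            if (msg === 'ready') everReady = true;
        });
    };

    cluster.on('exit', function() {
        if (everReady) {
            fork();  // the worker ran successfully at least once: respawn it
        } else {
            console.error('worker never reached ready, shutting down');
            process.exit(1);  // never ready: give up instead of looping forever
        }
    });

    fork();
} else {
    // Worker: do setup, then signal readiness once the server is listening.
    var server = require('http').createServer(function(req, res) { res.end('ok'); });
    server.listen(0, function() {
        process.send('ready');
    });
}
```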

spion commented Nov 7, 2014

I think I understand now. When an error like that occurs, I usually want to revert to an earlier version of the main program ASAP and let the automatic restart pick it up. Only in the special case where I don't want that (because, for example, env variables need to change) would I provide a way to call .terminate() manually (e.g. via SIGHUP) and then start the master again.

However, your approach is an equally valid alternative. I'll try to find some time this weekend to look into it. I'd also welcome a pull request, if you have the time for it :)
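
A small sketch of that manual escape hatch, assuming recluster's terminate() stops the workers without respawning them (similar to the reload-on-signal pattern shown in the README):

```js
var recluster = require('recluster');
var path = require('path');

var cluster = recluster(path.join(__dirname, 'worker.js'));
cluster.run();

process.on('SIGHUP', function() {
    console.log('Got SIGHUP, terminating cluster...');
    cluster.terminate();  // stop all workers and do not respawn them
});
```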

spion commented Sep 10, 2015

Closing as wontfix - the recommended solution is to take advantage of the backoff option to avoid wasting resources, and to kill the process externally if it's not possible to solve the problem by swapping the code (e.g. because the problem is missing environment variables).

spion closed this as completed Sep 10, 2015