When launching multiple VMs and one fails, only kill that one. #49

Open
SolomonShorser-OICR opened this issue Oct 23, 2015 · 1 comment

Comments

@SolomonShorser-OICR
Member

When the Provisioner launches several VMs in a single batch and ansible fails to provision just one of them (for example, an SSH timeout while connecting), the entire batch is killed at the end of the playbook, because the playbook returns a non-zero exit code even when only one VM fails. This is less than ideal when provisioning takes a while and large batches are launched at a time.

(This was originally created on Consonance, but that was the wrong place: Consonance/consonance#97)
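
One way to get per-node reaping without scraping console text would be to run the playbook with ansible's JSON stdout callback and read the per-host stats it emits. A minimal sketch, assuming an ansible version that ships the `json` stdout callback, and a hypothetical `terminate_vm(host)` helper standing in for whatever the Provisioner actually uses to kill a single VM:

```python
import json
import os
import subprocess

def provision_and_reap_failed(playbook, inventory, terminate_vm):
    """Run a playbook and tear down only the hosts that failed."""
    env = dict(os.environ, ANSIBLE_STDOUT_CALLBACK="json")
    result = subprocess.run(
        ["ansible-playbook", "-i", inventory, playbook],
        env=env, capture_output=True, text=True,
    )
    # The overall exit code is non-zero if *any* host failed; ignore it
    # and inspect the per-host stats instead.
    stats = json.loads(result.stdout).get("stats", {})
    failed = [
        host for host, s in stats.items()
        if s.get("failures", 0) or s.get("unreachable", 0)
    ]
    for host in failed:
        terminate_vm(host)  # hypothetical helper: reap only this node
    return failed
```

Hosts that provisioned cleanly are left running, so a single SSH timeout no longer takes down the whole batch.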

@SolomonShorser-OICR
Member Author

Another issue I've discovered here: the workers that provisioned OK might actually have enough time to pull a job from the queue before they are reaped. So you could have scenarios where your job queue is draining but no work is getting done, because the entire fleet is killed when one or two nodes fail to provision. Instead of killing the fleet at the end, would it be possible to do it at the beginning, as soon as a failure on one node is detected? Ideally, only the failed node would be reaped, but I realize that might be difficult (it would probably involve parsing the text output of ansible).
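
That text parsing might be less fragile than it sounds: ansible ends every run with a PLAY RECAP block listing per-host `failed` and `unreachable` counts, which a regex can pick apart. A rough sketch, assuming the Provisioner captures the playbook's stdout as a string (the function name is hypothetical):

```python
import re

# Matches PLAY RECAP lines such as:
#   worker-3    : ok=12   changed=4    unreachable=1    failed=0
RECAP_LINE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=\d+\s+changed=\d+\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)",
    re.MULTILINE,
)

def failed_hosts_from_recap(playbook_output):
    """Return hostnames whose recap shows a failed or unreachable count."""
    return [
        m.group("host")
        for m in RECAP_LINE.finditer(playbook_output)
        if int(m.group("failed")) or int(m.group("unreachable"))
    ]
```

Only the hosts returned here would need to be reaped, so workers that provisioned cleanly could keep pulling jobs from the queue.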
