
Retry helm installs #21

Merged 1 commit into master from helm-retry on Feb 10, 2018



@c-w commented Feb 9, 2018

I saw one deployment where helm failed because an endpoint on the k8s infrastructure wasn't reachable. I'm wondering if we can harden the helm installs with this retry. Thoughts, @jmspring ?
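A minimal sketch of such a retry wrapper, assuming a bounded number of attempts with a fixed delay between them (the function name, attempt count, and helm invocation below are illustrative, not taken from the PR diff):

```shell
#!/usr/bin/env bash
# Hypothetical retry helper: run a command up to N times, sleeping between
# attempts, and fail only once all attempts are exhausted.
retry() {
  local attempts="$1" delay_s="$2"; shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    echo "Attempt $i/$attempts failed: $*" >&2
    sleep "$delay_s"
  done
  return 1
}

# Illustrative usage, e.g. wrapping a helm install:
#   retry 5 10 helm install stable/traefik --namespace kube-system
```

A bounded retry makes the failure mode explicit: a persistently unreachable endpoint surfaces as a deployment error after the last attempt instead of hanging indefinitely.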

@c-w c-w requested a review from jmspring Feb 9, 2018



@jmspring commented Feb 9, 2018

This looks OK and should solve the problem. Another thought: you could use kubectl to check the status of the k8s infrastructure; you'd need to make sure Tiller is running. Also, should the retry be infinite?


@c-w (Member Author) commented Feb 10, 2018

We're already using kubectl to defer all helm calls until tiller is ready:

echo "Waiting for Tiller pod to get ready"
while ! (kubectl get po --namespace kube-system | grep -i 'tiller' | grep -i 'running' | grep -i '1/1'); do
    echo "Waiting for Tiller pod"
    sleep 10s
done

Are there other things we can/should verify?
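One option, picking up the bounded-vs-infinite question above, would be to put a deadline on that wait loop so a Tiller pod that never becomes ready fails the deployment instead of blocking forever. A sketch, assuming a generic check-command wrapper (the function name and timeout values are illustrative; the kubectl pipeline above could be passed in as the check):

```shell
#!/usr/bin/env bash
# Hypothetical bounded wait: run a check command repeatedly until it
# succeeds or a deadline passes, then give up with a nonzero status.
wait_for() {
  local timeout_s="$1" interval_s="$2"; shift 2
  local waited=0
  until "$@"; do
    if [ "$waited" -ge "$timeout_s" ]; then
      echo "Timed out after ${timeout_s}s waiting for: $*" >&2
      return 1
    fi
    sleep "$interval_s"
    waited=$((waited + interval_s))
  done
}

# Illustrative usage:
#   wait_for 300 10 kubectl get po --namespace kube-system ...
```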

@c-w c-w merged commit 0011287 into master Feb 10, 2018

2 checks passed

continuous-integration/travis-ci/pr The Travis CI build passed
continuous-integration/travis-ci/push The Travis CI build passed

@c-w c-w deleted the helm-retry branch Feb 10, 2018
