
Retry helm installs #21

Merged
merged 1 commit into from Feb 10, 2018

Conversation

@c-w
Contributor

@c-w c-w commented Feb 9, 2018

I saw one deployment where helm failed because an endpoint on the k8s infrastructure wasn't reachable. I'm wondering if we can harden the helm installs with this retry. Thoughts, @jmspring?
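The bounded retry being proposed could look something like the sketch below. The actual diff isn't shown in this conversation, so the `retry` helper name, the 5-attempt example, and the `RETRY_DELAY` override are all illustrative assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a bounded retry wrapper for helm installs.
# The function name, attempt counts, and RETRY_DELAY override are
# illustrative; they are not taken from the PR diff.

retry() {
  local max_attempts="$1"; shift
  local delay="${RETRY_DELAY:-10}"   # seconds between attempts
  local attempt=1
  until "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "Giving up after ${attempt} attempts: $*" >&2
      return 1
    fi
    echo "Attempt ${attempt} failed; retrying in ${delay}s" >&2
    attempt=$((attempt + 1))
    sleep "$delay"
  done
}

# Example usage (chart name is illustrative):
# retry 5 helm install stable/some-chart
```

A bounded count like this also answers the "should the retry be infinite?" question raised below: the script still fails loudly after the last attempt instead of hanging forever.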

@c-w c-w requested a review from jmspring Feb 9, 2018
@jmspring
Contributor

@jmspring jmspring commented Feb 9, 2018

This looks OK and should solve the problem. Another thought: could we use kubectl to check the status of the k8s infrastructure? You'd need to make sure Tiller is running. Also, should the retry be infinite?

@c-w
Contributor Author

@c-w c-w commented Feb 10, 2018

We're already using kubectl to defer all helm calls until Tiller is ready:

```shell
echo "Waiting for Tiller pod to get ready"
while ! (kubectl get po --namespace kube-system | grep -i 'tiller' | grep -i 'running' | grep -i '1/1'); do
  echo "Waiting for Tiller pod"
  sleep 10s
done
```

Are there other things we can/should verify?
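One further check worth considering (a hedged suggestion, not something settled in this thread) is confirming that all nodes report `Ready` before invoking helm. A minimal sketch, assuming the standard `kubectl get nodes` column layout where the status is the second field:

```shell
#!/usr/bin/env bash
# Hypothetical helper (illustrative, not from this PR): succeeds only
# when every node listed by `kubectl get nodes` reports Ready.
all_nodes_ready() {
  kubectl get nodes --no-headers \
    | awk '$2 != "Ready" { notready = 1 } END { exit notready }'
}

# Example usage, in the same style as the Tiller wait loop:
# while ! all_nodes_ready; do
#   echo "Waiting for all nodes to be Ready"
#   sleep 10s
# done
```

This only covers node health; it wouldn't catch the unreachable-endpoint failure that motivated the PR, so the retry is still useful on top of it.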

@c-w c-w merged commit 0011287 into master Feb 10, 2018
2 checks passed
@c-w c-w deleted the helm-retry branch Feb 10, 2018