helm upgrade --recreate-pods deletes all the pods at once, causing application downtime. Is there an alternative way to roll out our latest image without downtime? #5218
Comments
You can solve it with a workaround: add an annotation to the pods, and every time you need to restart them, change the value of that annotation.
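A minimal sketch of this pattern in a Deployment template, following the "automatically roll deployments" trick from the Helm chart tips (the annotation name `rollme` is conventional, not required):

```yaml
# templates/deployment.yaml (fragment)
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        # A new random value on every `helm upgrade` changes the pod template,
        # so the Deployment performs a normal rolling update instead of
        # requiring --recreate-pods.
        rollme: {{ randAlphaNum 5 | quote }}
```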
Hi @kcatstack, thank you very much for responding to my problem. Could you please send me the full command? We are using the command below; could you please adjust it as per your suggestion? That would be a great help for us. `helm upgrade --recreate-pods ecf-helm-satellite-qa . --set=image.tag=qa-helm-image --debug`. We need to update the image without downtime.
First, add an annotation to the pod template. If your chart is of kind Deployment, put it under `spec.template.metadata.annotations`. Deploy that. Now, every time you change the annotation's value and run `helm upgrade`, the pod template changes and the Deployment rolls the pods gradually via its rollingUpdate strategy.
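Assuming the chart wires a value such as `timestamp` into that pod-template annotation (a hypothetical value name, shown here only as an illustration), the upgrade command from above might then look like this, with `--recreate-pods` dropped:

```shell
# Sketch: a fresh timestamp each run changes the annotation, which changes
# the pod template, so the Deployment rolls pods one batch at a time.
helm upgrade ecf-helm-satellite-qa . \
  --set image.tag=qa-helm-image \
  --set timestamp="$(date +%s)" \
  --debug
```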
We have used this annotation approach. It also requires removal of `--recreate-pods` to achieve the desired rollingUpdate strategy.
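For reference, the rollingUpdate strategy that takes over once `--recreate-pods` is removed looks like this in a Deployment (the values shown are the Kubernetes defaults):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most this fraction of pods down at once
      maxSurge: 25%         # extra pods allowed above the desired count
```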
I'm going to close this as answered, but please reopen if this doesn't fully address your use case. Thanks for chiming in @timm088 and @kcatstack!
@timm088 and @kcatstack thank you very much for your help, it was a nice explanation. Cheers.
If using helm v3+, where `--recreate-pods` has been removed, use `kubectl rollout restart` instead.
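A sketch of that approach (the Deployment name is taken from the release name used earlier in this thread and may differ in your cluster; `kubectl rollout restart` requires kubectl 1.15 or later):

```shell
# Triggers a rolling restart via the Deployment's rollingUpdate strategy;
# pods are replaced gradually, so there is no downtime.
kubectl rollout restart deployment/ecf-helm-satellite-qa
kubectl rollout status  deployment/ecf-helm-satellite-qa
```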
@timm088 @anlsergio Thanks for solution it worked for me. |
helm upgrade --recreate-pods is deleting all the pods, which causes application downtime for us. Is there an alternative command to roll out our latest image without downtime?
We are using the same tag for all image builds, so every deploy should pull image:latest; we have also set imagePullPolicy to Always in the deployment YAML.
Please suggest a solution; this is blocking us.
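Note that `imagePullPolicy: Always` only takes effect when a pod is (re)created; reusing the same tag does not by itself restart anything, which is why the pods must still be rolled somehow. A sketch of the relevant Deployment fields (the image name is illustrative):

```yaml
spec:
  template:
    spec:
      containers:
        - name: app
          image: myrepo/myapp:latest  # same tag on every build (illustrative name)
          imagePullPolicy: Always     # pull happens on every pod (re)creation
```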
helm version:
```
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
```
kubectl version:
```
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-07-26T20:40:11Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.11-eks", GitCommit:"6bf27214b7e3e1e47dce27dcbd73ee1b27adadd0", GitTreeState:"clean", BuildDate:"2018-12-04T13:33:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
```
Cloud Provider/Platform : AKS