Helm upgrade does not terminate the old POD #7916
Comments
There's not really enough information in your description to know what is going on, but I can at least point you in the right direction. Helm does not terminate pods itself; Kubernetes decides when a pod needs to be destroyed and recreated, based on whether its spec changed. My guess is that in your case the pod definition did not change from one upgrade to the next, so Kubernetes did not think it needed to destroy and recreate the pod.
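As an aside, when a deployment's pod spec is unchanged but you still want an upgrade to roll the pods (e.g. because a ConfigMap changed), the Helm chart-development tips describe a checksum annotation on the pod template. A sketch, assuming a template file named `configmap.yaml` in the same chart:

```yaml
# Sketch (illustrative paths): hashing the ConfigMap into a pod-template
# annotation changes the pod spec whenever the ConfigMap changes, so
# Kubernetes sees a diff and recreates the pods on `helm upgrade`.
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```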
@technosophos Thanks for the update.
Hey @avinashkumar1289. Without a demonstration available, we don't have a way to help you. If you can provide a set of reproducible steps using a sample scaffolded chart, we can take a closer look.
Hi @bacongobbler, just an update: I was able to fix this issue by setting the Recreate deployment strategy in deployment.yaml. But this is basically just a workaround, as this strategy deletes all the existing pods before starting the new ones.
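The workaround described above would look roughly like this in deployment.yaml (a minimal sketch; only the relevant field is shown):

```yaml
# Replace the default RollingUpdate strategy with Recreate: all old pods
# are terminated before the new ones start, which releases the PVC but
# causes a brief window of downtime during each upgrade.
apiVersion: apps/v1
kind: Deployment
spec:
  strategy:
    type: Recreate
```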
I will reiterate what I said before: Kubernetes is responsible for determining when to deploy new pods. Helm allows you to bypass Kubernetes and force recreation, but by default it defers to Kubernetes. If the old pod is there, Kubernetes believes that it is supposed to be there, so something in your deployment.yaml is indicating to it that having multiple pods running for that deployment is fine. I would suggest trying to do this outside of Helm (apply the rendered manifests with kubectl directly) and seeing whether you observe the same behavior.
Hello avinash, please find the solution below.
@rajendraprasad9 Thanks!!
@rajendraprasad9 But do you know why it only happens when there is a PVC attached to my pod? Without the PVC, I have not faced this issue.
This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.
Thanks for your insights, @rajendraprasad9! @avinashkumar1289 I'm going to resolve this issue now, as we have given the input we can with the available information. Please open a new issue with more detail if desired.
I suspect this is not an issue with Helm but with the default Kubernetes deployment strategy of RollingUpdate. If the PV/PVC has an accessMode of ReadWriteOnce and the app coming up in the new pod needs to write to the volume to pass its readinessProbe, the new pod will never look ready, so the old pod will not be destroyed. To solve this, you need to set the deployment strategy to Recreate.
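For context on the access-mode point above, a ReadWriteOnce claim looks like this (a minimal sketch; the claim name and size are illustrative, not from this issue):

```yaml
# ReadWriteOnce means the volume can be mounted read-write by a single
# node at a time. Under RollingUpdate, the new pod tries to attach the
# volume while the old pod still holds it, and the rollout deadlocks.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```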
I am facing an issue with helm upgrade. I have a pod that has a PVC (possibly relevant: I only started facing the issue after adding the PVC). When I do helm upgrade, I can see that the old pod is not terminated (it is still running), and the new pod is not able to start because it waits for the PVC, which is still bound to the old pod/container.
I remember the upgrades used to work properly before I added the PVC.
Output of `helm version`:

```
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}
```

Output of `kubectl version`:

```
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T23:40:44Z", GoVersion:"go1.14", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-gke.27", GitCommit:"145f9e21a4515947d6fb10819e5a336aff1b6959", GitTreeState:"clean", BuildDate:"2020-02-21T18:01:40Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}
```
Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE