
Helm upgrade does not terminate the old POD #7916

Closed
avinashkumar1289 opened this issue Apr 14, 2020 · 11 comments

@avinashkumar1289

I am facing an issue with helm upgrade. I have a pod that uses a PVC (probably relevant, since the problem started after I added the PVC). When I do helm upgrade, I can see that the old pod is not terminated (it is still running), and the new pod is not able to start because it waits for the PVC, which is still bound to the old pod/container.
I remember that upgrades used to work properly before I added the PVC.

Output of helm version: version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}

Output of kubectl version:Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T23:40:44Z", GoVersion:"go1.14", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-gke.27", GitCommit:"145f9e21a4515947d6fb10819e5a336aff1b6959", GitTreeState:"clean", BuildDate:"2020-02-21T18:01:40Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE

@technosophos
Member

There's not really enough information in your description to know what is going on. But I can at least point you in the right direction.

Helm does not terminate pods unless --force is specified (which is usually not recommended). It is Kubernetes's job to terminate pods when it detects that the conditions around a pod are sufficiently different to cause it to recreate the pod.

My guess is that in your case, the pod definition did not change from one upgrade to another. Thus Kubernetes did not think it needed to destroy and recreate the pod.

  • You can try using a Deployment instead of a Pod
  • You can try the --force flag (see the example below)
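
For reference, a minimal sketch of the second option, assuming a placeholder release name and chart path (note that --force replaces resources instead of patching them, which can itself cause downtime):

# placeholder release name and chart path
helm upgrade my-release ./mychart --force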

@avinashkumar1289
Author

avinashkumar1289 commented Apr 14, 2020

@technosophos Thanks for the update.
I am using a deployment.yaml, so the pod is managed by a Deployment, and I do the upgrade using Helm. After the upgrade I can see that a new pod has come up, but the previous/existing/old pod is still there and is not getting terminated. And since my new pod looks for the volume mount, it keeps waiting for the volume, which is still mounted by the existing/old pod.
So basically everything was working fine until I added a PVC to my deployment.yaml.
So I am not sure whether this is an issue with Helm, or Kubernetes, or something I am doing wrong?

@bacongobbler
Member

Hey @avinashkumar1289. Without much of a demonstration available, we don't have a way to help you. If you can provide a set of steps using a sample chart scaffolded with helm create, that would be much appreciated. That way we can better understand the issue you are going through.

@avinashkumar1289
Author

Hi @bacongobbler, just an update: I was able to fix this issue by adding this to deployment.yaml:

spec:
  strategy:
    type: Recreate

But this is basically just a workaround, as this strategy deletes all the existing pods and then starts new ones (see the fuller sketch below).
I will still try to give you a sample chart. Thanks!
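
For later readers, here is a fuller sketch of where that fragment sits in a Deployment manifest; the names and image are hypothetical, and strategy is a sibling of selector and template under spec:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical name
spec:
  replicas: 1
  strategy:
    type: Recreate             # delete the old pod before creating the new one
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0    # placeholder image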

@technosophos
Member

I will reiterate what I said before: Kubernetes is responsible for determining when to deploy new pods. Helm allows you to bypass Kubernetes with --force, but it is recommended that you do this through Kubernetes instead.

If the old pod is there, Kubernetes believes that it is supposed to be there. So something in your deployment.yaml is telling it that having multiple pods running for that deployment is fine. I would suggest trying to do this outside of Helm (run helm template and then try a few manual deployments with kubectl) and figure out how to do what you want. Then go back and modify the chart.
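
A sketch of that debugging loop, with placeholder release and chart names:

# render the chart locally without installing it
helm template my-release ./mychart > rendered.yaml
# apply the rendered manifests directly and watch what Kubernetes does
kubectl apply -f rendered.yaml
kubectl get pods -w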

@rajendraprasad9

Hello avinash,

Please find a solution below.
Using Helm to deploy and manage Kubernetes manifests is very convenient, but helm upgrade will not recreate pods automatically. Some people add --recreate-pods to force the pods to be recreated:
helm upgrade --recreate-pods -i k8s-dashboard stable/k8s-dashboard
That means you will not have zero downtime when you are doing the upgrade.
Fortunately, there is a better solution. According to issue report #5218 and the article "Deploying on Kubernetes #11: Annotations", we can add an annotation, such as a timestamp or a configmap/secret checksum, to spec.template.metadata.annotations in deployment.yaml:
kind: Deployment
metadata:
  ...
spec:
  template:
    metadata:
      labels:
        app: k8s-dashboard
      annotations:
        timestamp: "{{ date "20060102150405" .Release.Time }}"
    ...
Kubernetes will notice that your pods are updated and roll out new pods without downtime.
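
One caveat for Helm 3 users: .Release.Time was removed in Helm 3, so a timestamp annotation there needs the now function instead. Alternatively, the Helm chart development tips suggest using a checksum of the relevant ConfigMap/Secret template so pods only roll when that content actually changes. A sketch, assuming the chart has a templates/configmap.yaml and using an arbitrary annotation key:

spec:
  template:
    metadata:
      annotations:
        # Helm 3 friendly timestamp (annotation key is arbitrary)
        rollme: "{{ now | date "20060102150405" }}"
        # or: roll only when the ConfigMap template's rendered content changes
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}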

@avinashkumar1289
Author

@rajendraprasad9 Thanks!!
I will surely try this, as I am currently forcing the pods to be recreated.

@avinashkumar1289
Author

@rajendraprasad9 But do you know why it only happens when there is a PVC attached to my pod? Without the PVC, I have not faced this issue.

@github-actions

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

@github-actions github-actions bot added the Stale label Aug 21, 2020
@bridgetkromhout
Member

Thanks for your insights, @rajendraprasad9! @avinashkumar1289 I'm going to resolve this issue now, as we have given the input we can with the available information. Please open a new issue with more detail if desired.

@tardis4500

I suspect this is not an issue with Helm but with the default Kubernetes deployment strategy of RollingUpdate. If the PV/PVC has an accessMode of ReadWriteOnce and the app that is coming up in the new pod needs to write to the volume to pass the readinessProbe, it will not look ready so the old pod will not be destroyed. To solve this, you need to set the deployment strategy to Recreate.
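
For concreteness, the access mode in question is set on the PVC; ReadWriteOnce means the volume can be mounted read-write by a single node at a time, which is why a RollingUpdate can deadlock on the readinessProbe while Recreate (shown earlier in the thread) does not. A minimal PVC sketch with a hypothetical name and size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data              # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce          # read-write by a single node at a time
  resources:
    requests:
      storage: 1Gi           # placeholder size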
