This repository has been archived by the owner on Feb 22, 2022. It is now read-only.

Helm upgrade fails with existing persistent volume claims #1472

Closed
bajacondor opened this issue Jul 11, 2017 · 17 comments

Comments

@bajacondor

Hello,

When running a helm upgrade, I am specifying an existing PVC (which was created dynamically during the helm install), because I'd like the upgraded Helm release to continue to use that same volume. Unfortunately, the helm upgrade deletes the PVC before the deployment can attach it.

Is there another way to do this?

Thank you.
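For reference, Helm has a resource-policy annotation that tells it to leave a resource in place rather than delete it during an upgrade or deletion. A minimal sketch of a PVC template using it (all names and sizes here are illustrative, not from this chart):

```yaml
# templates/pvc.yaml -- illustrative sketch only
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-data          # hypothetical claim name
  annotations:
    # Tells Helm to keep this PVC instead of deleting it when the
    # release is upgraded or removed.
    "helm.sh/resource-policy": keep
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi                         # assumed size
```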

@SacDin

SacDin commented Jan 14, 2018

Solution ?

@iamrandys

We're running into the same issue. Is there some way to record the dynamically created PV's volume ID on installation so that the helm chart can be installed again and use the same volumes? I would think everyone would want to do this.
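For what it's worth, one way to pin a chart to a previously provisioned volume (assuming the PV survived, e.g. because its reclaim policy is Retain) is to create the claim with an explicit spec.volumeName instead of relying on dynamic provisioning. A rough sketch with placeholder names:

```yaml
# Illustrative only -- binds a new claim to an already-existing PV.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data                  # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""               # skip dynamic provisioning for this claim
  volumeName: pvc-0a1b2c3d-example   # the recorded PV name / volume ID
  resources:
    requests:
      storage: 8Gi
```

Depending on the reclaim policy, a Released PV may also need its claimRef cleared before it can bind to the new claim.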

@SacDin

SacDin commented Jan 15, 2018

This issue should be re-opened. @bajacondor

@Antiarchitect
Contributor

Same here:

Error: release xxxxxxxxxxxx failed: persistentvolumes "xxxxxxxxxxxxxxx" already exists

This is what I get from helm install ...

@wernight

Why is this closed? It's really useful.

@Antiarchitect
Contributor

Antiarchitect commented Jan 26, 2018

Actually, my problem was this: I hadn't set up .helmignore properly, so helm tried to push too much data (there are several GB in this folder) and my release rolled out only partially. Helm therefore thinks it has no release with this name, but the named PVs and PVCs were already deployed. Once I added the irrelevant parts of the folder to .helmignore, everything worked like a charm.
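In case it helps anyone else hitting the same packaging problem, a .helmignore at the chart root keeps large, chart-irrelevant paths out of what helm uploads. The entries below are just an example of the gitignore-style syntax, not the actual folders from this setup:

```
# .helmignore -- example entries, adjust to your own layout
.git/
*.tgz
*.log
data/            # hypothetical multi-GB directory unrelated to the chart
node_modules/
```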

@lamjo

lamjo commented Oct 12, 2018

Please re-open. We're running into the same issue.

@jonesg504

I am also running into this issue. Shouldn't it ignore the persistent volume if it's already there?

@mstryaou

Same problem. Any graceful solution?

@27Bslash6
Contributor

We're also getting this issue, apparently randomly. Retrying eventually works.

@sunvim

sunvim commented Nov 12, 2019

Please re-open; it's very useful and important.

@botjaeger

what happened to this issue?

@kenperkins
Contributor

We are also running into this issue and would love to know from @27Bslash6 whether the retrying behavior was consistent. We've attempted retries a few times and have had no success. I also raised this issue in the #helm-dev Slack channel.

@27Bslash6
Contributor

Sorry, I don't work much with Helm anymore, @kenperkins.

Our deployment automation just retried failed deploys at least 5 times with a binary exponential backoff (BEBO) delay, which appeared to mitigate (or at least mask) the problem.

@novitoll

Ditto

@LeiYangGH

This shouldn't have been closed without any explanation.

@tardis4500

I suspect this is not an issue with Helm but with the default Kubernetes deployment strategy of RollingUpdate. With RollingUpdate, the new pod is started before the old one is terminated, so if the PV/PVC has an accessMode of ReadWriteOnce, the new pod cannot attach the volume while the old pod still holds it. If the app coming up in the new pod needs to write to that volume to pass its readinessProbe, it will never look ready and the old pod will never be destroyed. To solve this, you need to set the deployment strategy to Recreate.
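For anyone wanting to try that, here is a minimal sketch of what the change looks like (resource names and the image are placeholders, not taken from this thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  # Recreate terminates the old pod before starting the new one, so the
  # ReadWriteOnce volume is free to be attached by the replacement pod.
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest           # placeholder image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-app-data       # the existing ReadWriteOnce claim
```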
