Helm 3.0.0 beta 4 - can't upgrade charts (existing resource conflict) #6646
Comments
@alemorcuq Thanks for reporting the issue! It looks like you have done some research. If you are interested in working on this and raising a PR, please mention it here and go ahead, or we will check it out and post our findings 😄 If I have time I'll dig a bit to understand the issue better and see if I can help with the root cause
@alemorcuq With the steps you provided, I'm not able to reproduce the issue. My guess from the logs and what you are trying to do: it looks like there's an existing Deployment named
Same issue here. I cannot understand why only some of my charts are affected.
It's so annoying having to apply these workarounds in my deploy.sh script (this is for Sentry):
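The workaround script itself wasn't captured in this thread, but it presumably looked something like the following sketch (the release name, namespace, and resource names are all hypothetical; by default it only prints the commands instead of running them):

```shell
#!/bin/sh
# Hypothetical pre-deploy cleanup: delete the conflicting resources so the
# subsequent `helm upgrade` can recreate them under the new apiVersion.
# Dry run by default; set RUN=1 to actually execute the commands.
RELEASE="sentry"        # assumed release name
NAMESPACE="default"     # assumed namespace

maybe() {
  # Echo the command unless RUN=1 is set, then it really executes.
  if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi
}

maybe kubectl -n "$NAMESPACE" delete deployment "$RELEASE-web" --ignore-not-found
maybe kubectl -n "$NAMESPACE" delete deployment "$RELEASE-worker" --ignore-not-found
maybe helm upgrade --install "$RELEASE" stable/sentry
```

Deleting and recreating the objects is disruptive, which is exactly why commenters below call this workaround annoying.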
Still present in 3.0.0-beta.5, BTW.
@Antiarchitect Sorry about the issues. But can you help us reproduce the issue?
Actually, I don't know how to reproduce it properly, but when it appears, it stays. Personally, I hit it after creating my own chart with these dependencies in Chart.yaml:
All of the included charts were created by Helm 2.x, obviously. This is from my deployment script:
And after some successes and failures (maybe including chart upgrades), the deploy starts failing with:
This happens for each resource I mentioned in the previous comment unless I delete it explicitly before deploying.
I believe it happens because Helm is trying to create resources instead of updating them, or it simply doesn't recognize the resources it created previously.
I've debugged my helm a little bit:
And got:
Seems like a mess of versions of everything I've ever had in this chart and all its subcharts over time.
According to the error in this comment (#6646 (comment)), I believe there's an existing secret. Could you check it out?
I have to check the code, but there is another avenue for errors. From your error, though, I don't think anything of that sort happened for the Secret resource.
This is the output with the same debugging as @Antiarchitect's, in my case, @karuppiah7890:
I do have the
Yes, there is the change in the
Seems this check relies on keys that include the full API version, which doesn't match in our case.
On beta.3 everything was fine, because that check just didn't exist.
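A rough sketch of why the check trips (this is conceptual, not Helm's actual code, and the resource name is made up): resources are keyed by their full apiVersion, kind, and name, so the same Deployment looks like two unrelated resources once the chart moves from `extensions/v1beta1` to `apps/v1`:

```shell
# Conceptual sketch of the key comparison: a resource key includes the full
# apiVersion, kind, and name.
live_key="extensions/v1beta1/Deployment/my-app"   # what exists in the cluster
rendered_key="apps/v1/Deployment/my-app"          # what the new chart renders

if [ "$live_key" != "$rendered_key" ]; then
  echo "keys differ: the live object looks like an unrelated, conflicting resource"
fi
```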
Helm performs a lookup for the object based on its group (apps), version (v1), and kind (Deployment), also known as its GroupVersionKind, or GVK. Changing the GVK is considered a compatibility breaker from Kubernetes' point of view, so you cannot "upgrade" those objects to the new GVK in-place. Earlier versions of Helm 3 did not perform the lookup correctly, which has since been fixed to match the spec. A larger explanation was provided in #6583. The tl;dr is that since this is considered a breaking API change, you must delete the object from the cluster before you can upgrade to the new GVK. Kubernetes will not allow you to migrate objects in-place from extensions/v1beta1 to apps/v1. There are backwards-incompatible changes between the two schemas, and as such they need to be treated as isolated objects that cannot upgrade from one into another. Hope this helps.
I'll also come up with a strategy for the upgrade and publish it as a blog post, or as a doc if that's acceptable.
Yes. Thank you very much, @bacongobbler, it's all clear now. Also, thank you @Antiarchitect and @karuppiah7890.
@bacongobbler @karuppiah7890 It seems like the current set of checks is incorrect, because I have, for example, the secret v1/Secret/production-app-sentry-redis-ha-managed, which is actually used by redis-ha and was created by helm, yet it counts as not present. On the other hand, helm counts
@bacongobbler Have a look at this commit - it's very representative: helm/charts@26c7572
In helm#17295 the `apiVersion` of the deployment resource was updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks upgradability of existing drupal deployments with helm v3 (rc) as described in helm/helm#6646. To fix this, we have defined a `drupal.deployment.apiVersion` helper that returns the new `apiVersion` only when k8s 1.16 or higher is in use. Signed-off-by: Sameer Naik <sameersbn@vmware.com>
In helm#17294 the `apiVersion` of the deployment resource was updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks upgradability of existing dokuwiki deployments with helm v3 (rc) as described in helm/helm#6646. To fix this, we have defined a `dokuwiki.deployment.apiVersion` helper that returns the new `apiVersion` only when k8s 1.16 or higher is in use. Signed-off-by: Sameer Naik <sameersbn@vmware.com>
In helm#17301 the `apiVersion` of the deployment resource was updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks upgradability of existing moodle deployments with helm v3 (rc) as described in helm/helm#6646. To fix this, we have defined a `moodle.deployment.apiVersion` helper that returns the new `apiVersion` only when k8s 1.16 or higher is in use. Signed-off-by: Sameer Naik <sameersbn@vmware.com>
In helm#17298 the `apiVersion` of the deployment resource was updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks upgradability of existing jasperreports deployments with helm v3 (rc) as described in helm/helm#6646. To fix this, we have defined a `jasperreports.deployment.apiVersion` helper that returns the new `apiVersion` only when k8s 1.16 or higher is in use. Signed-off-by: Sameer Naik <sameersbn@vmware.com>
In helm#17300 the `apiVersion` of the deployment resource was updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks upgradability of existing mediawiki deployments with helm v3 (rc) as described in helm/helm#6646. To fix this, we have defined a `mediawiki.deployment.apiVersion` helper that returns the new `apiVersion` only when k8s 1.16 or higher is in use. Signed-off-by: Sameer Naik <sameersbn@vmware.com>
In helm#17281 the `apiVersion` of the deployment resource was updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks upgradability of existing postgresql deployments with helm v3 (rc) as described in helm/helm#6646 and also effects any charts that depend on this chart To fix this, we have defined a `postgresql.deployment.apiVersion` helper that returns the new `apiVersion` only when k8s 1.16 or higher is in use. Signed-off-by: Sameer Naik <sameersbn@vmware.com>
In helm#17285 the `apiVersion` of the deployment resource was updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks upgradability of existing kubewatch deployments with helm v3 (rc) as described in helm/helm#6646. To fix this, we have defined a `kubewatch.deployment.apiVersion` helper that returns the new `apiVersion` only when k8s 1.16 or higher is in use. Signed-off-by: Sameer Naik <sameersbn@vmware.com>
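The helper these commit messages describe isn't shown in the thread, but based on the description it presumably looks something like this sketch (the helper name comes from the drupal commit message; the exact version constraint and the legacy fallback group are assumptions):

```yaml
{{/*
Hypothetical sketch of the chart helper: emit apps/v1 only when the
cluster is Kubernetes 1.16 or newer, otherwise keep the legacy group so
existing releases keep their old GVK and can still be upgraded in-place.
*/}}
{{- define "drupal.deployment.apiVersion" -}}
{{- if semverCompare ">=1.16-0" .Capabilities.KubeVersion.GitVersion -}}
apps/v1
{{- else -}}
extensions/v1beta1
{{- end -}}
{{- end -}}
```

A template would then use it as `apiVersion: {{ include "drupal.deployment.apiVersion" . }}`, so the rendered GVK matches whatever the target cluster can serve.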
FYI, this seems highly related to #2947. And I am facing this issue without any API change.
Confirmed. In some cases it appears without any API changes.
@bacongobbler We should reopen this case.
And I can confirm using
@bacongobbler Having the same issue with helm
I am also getting this when trying to use helm 3 to update a service that was previously installed using helm 2.
Yep, just hit this one as well, on a secret. I'm using version 3.0.0.
@bacongobbler Seems like the problem persists. Should this issue be reopened?
Is there a workaround to 'ignore' existing resources, or something? Really hoping to be able to upgrade without blowing everything away and starting all over.
Ran into the same issue when upgrading from version 2 to 3, using the 2to3 migration plugin.
Same issue on v3.0.0. Please re-open!
Same issue in v3.0.0. Please re-open this issue. Tried
We are observing this issue on 3.0.0 with persistent volume claims. Please reopen.
Please follow up in #6850 for issues related to resource creation. The issue raised by @alemorcuq results in the same error, but it has a different diagnosis than the issues being raised here. Thanks.
One quick fix would be to allow helm to ignore the conflicting resource and go ahead with the release upgrade. I happened to make some changes to the source to make this happen. Clone the repo https://github.com/RiflerRick/helm/tree/debug-v3.0.0 and simply run
The bulk of the code is in Line 192 in e29ce2a
Here is how helm is creating target and current resource lists:
The possible cause
Possible mitigation: one attempt was to ignore the conflicting resource, essentially deleting it from the current and target arrays so that helm does not modify the resource at all. The following commit does exactly that. It might result in inconsistent state in the helm secrets; that is something yet to be tested fully. If this is indeed the way to go, then a possible resolution might involve a flag, something like
I think this issue should be re-opened to work on it!
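Conceptually, the filtering described above could look like this sketch (the keys and list contents are made up, and Helm's real resource lists are Go structures rather than text):

```shell
# Sketch: remove the conflicting resource key from both the "current" and
# "target" lists so the upgrade never touches that object. Keys are
# hypothetical examples in apiVersion/kind/name form.
conflict="apps/v1/Deployment/production-app"

current="apps/v1/Deployment/production-app
v1/Secret/production-app-sentry-redis-ha-managed"
target="$current"

# Drop the conflicting key from each list (fixed-string, inverted match).
filtered_current=$(printf '%s\n' "$current" | grep -vF "$conflict")
filtered_target=$(printf '%s\n' "$target" | grep -vF "$conflict")

echo "$filtered_current"
```

As the comment notes, skipping the object sidesteps the conflict but leaves the stored release manifest out of sync with the cluster, which is why this would need to be an explicit opt-in flag rather than default behaviour.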
I also believe a possible reason could be pressing Ctrl+C on a helm deploy while using --atomic and then trying to deploy again afterward. Could it be that Helm does not mark the objects it creates (or does not record the release) until some phase, so objects created before the Ctrl+C are not recognized by Helm afterward?
I cannot update deployments to apps/v1 without removing them manually... Can you please fix it?
I am locking this thread for the time being. As explained above, this thread and #6850 are separate issues. I want to be respectful of the OP's time: the issue originally raised here has since been resolved. Please carry on the conversation in #6850. Thanks!
Hi, Bitnami developer here. When trying to upgrade a chart to a newer version using `helm v3 beta 4`, it fails due to an existing resource conflict. This never happened with `helm v2`, nor with the previous beta versions of `helm v3`. It seems to be related to commit 36f3a4b. I would like to know if this is a bug or the expected behaviour, and in that case try to understand why this is happening. This is happening to all our charts.
Steps to reproduce:
Note that running the above command with `helm v3 beta 3` instead works correctly.
Output of `helm version`:
Output of `kubectl version`:
Cloud Provider/Platform (AKS, GKE, Minikube etc.): GKE