Helm 3.0.0 beta 4 - can't upgrade charts (existing resource conflict) #6646

Closed
alemorcuq opened this issue Oct 11, 2019 · 37 comments

Comments

@alemorcuq

alemorcuq commented Oct 11, 2019

Hi, Bitnami developer here. When trying to upgrade a chart to a newer version using Helm v3 beta 4, it fails due to an existing resource conflict. This never happened with Helm v2 nor with the previous beta versions of Helm v3. It seems to be related to commit 36f3a4b.

I would like to know if this is a bug or the expected behaviour, and in the latter case understand why it is happening. This is happening to all our charts.

Steps to reproduce:

  1. Add the stable repository
helm repo add stable https://kubernetes-charts.storage.googleapis.com
  2. Grab and install an older version of a chart (e.g. dokuwiki)
helm fetch --untar --untardir . stable/dokuwiki --version 5.0.0
helm install doku ./dokuwiki --namespace dokuwiki
  3. Try to upgrade to the latest version
helm upgrade doku stable/dokuwiki --namespace dokuwiki
Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: Deployment, namespace: amoreno, name: doku-dokuwiki

Note that running the above command with helm v3 beta 3 instead works correctly.

Output of helm version:

version.BuildInfo{Version:"v3.0.0-beta.4", GitCommit:"7ffc879f137bd3a69eea53349b01f05e3d1d2385", GitTreeState:"dirty", GoVersion:"go1.13.1"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.3", GitCommit:"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0", GitTreeState:"clean", BuildDate:"2019-06-06T01:44:30Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.6-gke.2", GitCommit:"c9de33b5439df6e206d7ba646787c6ace92d737b", GitTreeState:"clean", BuildDate:"2019-09-06T18:30:33Z", GoVersion:"go1.12.9b4", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):
GKE

@karuppiah7890
Contributor

@alemorcuq Thanks for reporting the issue! It looks like you have done some research. If you are interested in working on this and raising a PR, please mention it here and go ahead; otherwise we will check it out and post our findings 😄 If I have time I'll dig in a bit to understand the issue better and see if I can help with the root cause.

@karuppiah7890
Contributor

@alemorcuq With the steps you provided, I'm not able to reproduce the issue. My guess from the logs and what you are trying to do is that there's an existing Deployment named doku-dokuwiki in the namespace amoreno that was somehow not created/managed by Helm, hence the error.

@Antiarchitect
Contributor

Same issue here. Cannot understand why only some of my charts are affected.

@Antiarchitect
Contributor

Antiarchitect commented Oct 26, 2019

It's so annoying to have to apply these workarounds in my deploy.sh script (this is for Sentry):

if [[ $1 == "--remove-existing" ]]; then
    echo "Removing existing entities"
    kubectl -n "$namespace" delete cm "$release_name"
    kubectl -n "$namespace" delete cm "${release_name}-redis-ha-managed-configmap"
    kubectl -n "$namespace" delete deploy "${release_name}-cron"
    kubectl -n "$namespace" delete deploy "${release_name}-web"
    kubectl -n "$namespace" delete deploy "${release_name}-worker"
    kubectl -n "$namespace" delete deploy "${release_name}-redis-ha-managed-haproxy"
    kubectl -n "$namespace" delete ing "$release_name"
    kubectl -n "$namespace" delete role "${release_name}-redis-ha-managed"
    kubectl -n "$namespace" delete rolebinding "${release_name}-redis-ha-managed"
    kubectl -n "$namespace" delete sa "$release_name"
    kubectl -n "$namespace" delete sa "${release_name}-redis-ha-managed"
    kubectl -n "$namespace" delete sa "${release_name}-redis-ha-managed-haproxy"
    kubectl -n "$namespace" delete secret "$release_name"
    kubectl -n "$namespace" delete secret "${release_name}-redis-ha-managed"
    kubectl -n "$namespace" delete svc "$release_name"
    kubectl -n "$namespace" delete svc "${release_name}-redis-ha-managed"
    kubectl -n "$namespace" delete svc "${release_name}-redis-ha-managed-announce-0"
    kubectl -n "$namespace" delete svc "${release_name}-redis-ha-managed-announce-1"
    kubectl -n "$namespace" delete svc "${release_name}-redis-ha-managed-announce-2"
    kubectl -n "$namespace" delete svc "${release_name}-redis-ha-managed-haproxy"
    kubectl -n "$namespace" delete statefulset "${release_name}-redis-ha-managed-server"
fi

@Antiarchitect
Contributor

Still present in 3.0.0-beta.5 BTW

@karuppiah7890
Contributor

@Antiarchitect Sorry about the issues, but can you help us reproduce it?

@Antiarchitect
Contributor

Antiarchitect commented Oct 28, 2019

Actually, I don't know how to reproduce it reliably, but once it appears it stays. As an example, I created my own chart with these dependencies in Chart.yaml:

- name: patroni
  alias: patroni-managed
  repository: https://kubernetes-charts-incubator.storage.googleapis.com/
  version: 0.14.0
- name: redis-ha
  alias: redis-ha-managed
  version: 3.9.2
  repository: https://kubernetes-charts.storage.googleapis.com/
- name: minio
  alias: minio-managed
  version: 2.5.16
  repository: https://kubernetes-charts.storage.googleapis.com/
- name: sentry
  version: 3.1.0
  repository: https://kubernetes-charts.storage.googleapis.com/

All of the included charts were obviously created with Helm 2.x. This is from my deployment script:

helm upgrade "$release_name" . \
     --atomic \
     --debug \
     --install \
     --namespace "$namespace" \
     --reset-values \
     --values "$var_file_name"

And after some successes and failures (maybe including upgrades of the included charts), the deploy starts failing with:

upgrade.go:87: [debug] performing update for production-app-sentry
Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: Secret, namespace: production-app-sentry, name: production-app-sentry-redis-ha-managed
helm.go:81: [debug] existing resource conflict: kind: Secret, namespace: production-app-sentry, name: production-app-sentry-redis-ha-managed
rendered manifests contain a new resource that already exists. Unable to continue with update
helm.sh/helm/v3/pkg/action.(*Upgrade).performUpgrade
        /home/circleci/helm.sh/helm/pkg/action/upgrade.go:216

This happens for each of the resources I mentioned in the previous comment, which is why I delete them explicitly before the deploy.

@Antiarchitect
Contributor

Antiarchitect commented Oct 29, 2019

I believe it happens because Helm is trying to create these resources instead of updating them, or it simply doesn't recognize resources it created previously as its own.


@Antiarchitect
Contributor

Antiarchitect commented Oct 29, 2019

I've debugged my helm a little bit:

	// Build a set of keys (apiVersion/kind/name) for every resource that is
	// part of the currently deployed release manifest.
	existingResources := make(map[string]bool)

	fmt.Println("CURRENT")
	for _, r := range current {
		fmt.Println(objectKey(r))
		existingResources[objectKey(r)] = true
	}

	// Anything in the target manifest whose key is not in the current release
	// is treated as a brand-new resource that has to be created.
	var toBeCreated kube.ResourceList
	fmt.Println("TOBECREATED")
	for _, r := range target {
		if !existingResources[objectKey(r)] {
			fmt.Println(objectKey(r))
			toBeCreated = append(toBeCreated, r)
		}
	}

	// If any of those "new" resources already exist in the cluster, the upgrade
	// is aborted with the conflict error shown above.
	if err := existingResourceConflict(toBeCreated); err != nil {
		return nil, errors.Wrap(err, "rendered manifests contain a new resource that already exists. Unable to continue with update")
	}

And got:

CURRENT
v1/Secret/production-app-sentry-minio-managed
v1/Secret/production-app-sentry-patroni-managed
v1/Secret/production-app-sentry-postgresql-managed
v1/Secret/production-app-sentry-redis-managed
v1/Secret/production-app-sentry-enapter-registry
v1/Secret/production-app-sentry-postgres-backup-env
v1/ConfigMap/production-app-sentry-minio-managed
v1/ConfigMap/production-app-sentry-redis-managed
v1/ConfigMap/production-app-sentry-redis-managed-health
v1/PersistentVolumeClaim/production-app-sentry-minio-ceph
v1/PersistentVolumeClaim/production-app-sentry-postgres-ceph
v1/ServiceAccount/production-app-sentry-minio-managed
v1/ServiceAccount/production-app-sentry-patroni-managed
rbac.authorization.k8s.io/v1beta1/Role/production-app-sentry-patroni-managed
rbac.authorization.k8s.io/v1beta1/RoleBinding/production-app-sentry-patroni-managed
v1/Service/production-app-sentry-minio-managed
v1/Service/production-app-sentry-patroni-managed
v1/Service/production-app-sentry-postgresql-managed-headless
v1/Service/production-app-sentry-postgresql-managed
v1/Service/production-app-sentry-redis-managed-headless
v1/Service/production-app-sentry-redis-managed-master
apps/v1/Deployment/production-app-sentry-minio-managed
apps/v1beta1/StatefulSet/production-app-sentry-patroni-managed
apps/v1/StatefulSet/production-app-sentry-postgresql-managed
apps/v1/StatefulSet/production-app-sentry-redis-managed-master
batch/v1beta1/CronJob/production-app-sentry-postgres-backup
v1/Endpoints/production-app-sentry-patroni-managed
TOBECREATED
v1/Secret/production-app-sentry-postgresql
v1/Secret/production-app-sentry-redis-ha-managed
v1/Secret/production-app-sentry-redis
v1/Secret/production-app-sentry
v1/ConfigMap/production-app-sentry-redis-ha-managed-configmap
v1/ConfigMap/production-app-sentry-redis
v1/ConfigMap/production-app-sentry-redis-health
v1/ConfigMap/production-app-sentry
v1/ServiceAccount/production-app-sentry-redis-ha-managed
v1/ServiceAccount/production-app-sentry-redis-ha-managed-haproxy
rbac.authorization.k8s.io/v1/Role/production-app-sentry-redis-ha-managed
rbac.authorization.k8s.io/v1/RoleBinding/production-app-sentry-redis-ha-managed
v1/Service/production-app-sentry-postgresql-headless
v1/Service/production-app-sentry-postgresql
v1/Service/production-app-sentry-redis-ha-managed-announce-2
v1/Service/production-app-sentry-redis-ha-managed-announce-1
v1/Service/production-app-sentry-redis-ha-managed-announce-0
v1/Service/production-app-sentry-redis-ha-managed
v1/Service/production-app-sentry-redis-ha-managed-haproxy
v1/Service/production-app-sentry-redis-headless
v1/Service/production-app-sentry-redis-master
v1/Service/production-app-sentry-redis-slave
v1/Service/production-app-sentry
apps/v1/Deployment/production-app-sentry-redis-ha-managed-haproxy
extensions/v1beta1/Deployment/production-app-sentry-cron
extensions/v1beta1/Deployment/production-app-sentry-web
extensions/v1beta1/Deployment/production-app-sentry-worker
apps/v1/StatefulSet/production-app-sentry-postgresql
apps/v1/StatefulSet/production-app-sentry-redis-ha-managed-server
apps/v1/StatefulSet/production-app-sentry-redis-master
apps/v1/StatefulSet/production-app-sentry-redis-slave
extensions/v1beta1/Ingress/production-app-sentry

Seems like a mess of versions of everything I ever had in this chart and all its subcharts over time.

@karuppiah7890
Contributor

karuppiah7890 commented Oct 29, 2019

According to the error in this comment (#6646 (comment)), I feel that there's an existing Secret. Could you check it out?

kind: Secret, namespace: production-app-sentry, name: production-app-sentry-redis-ha-managed

I still have to check the code, but another avenue for errors is the apiVersion, and it looks like you are already checking that by printing CURRENT and TOBECREATED, where TOBECREATED lists the resources that are about to be created. Issues can occur if the existing resources (manifests) have a different apiVersion for the same resource. For example, extensions/v1beta1/Deployment could be the installed one from the old chart, and apps/v1/Deployment could be the new one from the new chart. This is just a hunch.

But from your error, I don't think anything of that sort happened for the Secret resource.

@alemorcuq
Author

alemorcuq commented Oct 29, 2019

This is the output with the same debugging as @Antiarchitect's for my case, @karuppiah7890:

❯ ./helm-3.0.0-beta.5/bin/helm upgrade doku stable/dokuwiki --namespace doku-amoreno
CURRENT
v1/Secret/doku-dokuwiki
v1/PersistentVolumeClaim/doku-dokuwiki-dokuwiki
v1/Service/doku-dokuwiki
extensions/v1beta1/Deployment/doku-dokuwiki
TOBECREATED
apps/v1/Deployment/doku-dokuwiki
Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: Deployment, namespace: doku-amoreno, name: doku-dokuwiki

I do have the doku-dokuwiki deployment, which was created by installing a previous version of the chart with Helm.

❯ k get all -n doku-amoreno --show-labels
NAME                                READY   STATUS    RESTARTS   AGE     LABELS
pod/doku-dokuwiki-5c478cd57-x8brz   1/1     Running   0          8m44s   app=dokuwiki,chart=dokuwiki-5.0.0,pod-template-hash=5c478cd57,release=doku

NAME                            READY   UP-TO-DATE   AVAILABLE   AGE     LABELS
deployment.apps/doku-dokuwiki   1/1     1            1           8m44s   app=dokuwiki,chart=dokuwiki-5.0.0,heritage=Helm,release=doku

NAME                                      DESIRED   CURRENT   READY   AGE     LABELS
replicaset.apps/doku-dokuwiki-5c478cd57   1         1         1       8m44s   app=dokuwiki,chart=dokuwiki-5.0.0,pod-template-hash=5c478cd57,release=doku

Yes, there is a change in the apiVersion (since Kubernetes 1.16 dropped support for some old API versions), but there wasn't any issue upgrading until beta 4.
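
For anyone who wants to see the mismatch directly, something like the following should show it (a sketch, assuming the release and namespace names from the output above, and that apiVersion is the line immediately before kind in each rendered manifest):

# apiVersion recorded in the stored release for the Deployment
helm get manifest doku --namespace doku-amoreno | grep -B1 'kind: Deployment'
# expected: apiVersion: extensions/v1beta1

# apiVersion rendered by the new chart version
helm template doku stable/dokuwiki | grep -B1 'kind: Deployment'
# expected: apiVersion: apps/v1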

@Antiarchitect
Contributor

Seems this check relies on keys that include the full API version, which doesn't match in our case.

@Antiarchitect
Contributor

On beta.3 everything was still fine because that check just didn't exist.

@bacongobbler
Member

bacongobbler commented Oct 29, 2019

Helm performs a lookup for the object based on its group (apps), version (v1), and kind (Deployment), also known as its GroupVersionKind, or GVK. Changing the GVK is considered a compatibility breaker from Kubernetes' point of view, so you cannot "upgrade" those objects to the new GVK in-place. Earlier versions of Helm 3 did not perform the lookup correctly, which has since been fixed to match the spec.

A larger explanation was provided in #6583. The tl;dr is that since this is considered a breaking API change, you must delete the object from the cluster before you can upgrade to the new GVK. Kubernetes will not allow you to migrate objects in-place from extensions/v1beta1 to apps/v1. There are backwards-incompatible changes between the two schemas, so they need to be treated as isolated objects that cannot be upgraded from one into the other.

Hope this helps.
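
For the dokuwiki example above, the manual migration might look roughly like this (just a sketch, not an official procedure; --cascade=false orphans the existing ReplicaSet and pods so they keep running while the Deployment object itself is recreated under the new GVK):

# delete only the Deployment object, leaving its ReplicaSet and pods in place
kubectl delete deployment doku-dokuwiki --namespace doku-amoreno --cascade=false
# re-run the upgrade so Helm can create the apps/v1 Deployment
helm upgrade doku stable/dokuwiki --namespace doku-amoreno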

@karuppiah7890
Contributor

I'll also come up with a strategy for the upgrade and write it up as a blog post, I guess, or as a doc if that's acceptable.

@alemorcuq
Author

Yes. Thank you very much, @bacongobbler, it's all clear now.

Also, thank you @Antiarchitect and @karuppiah7890.

@Antiarchitect
Contributor

Antiarchitect commented Oct 30, 2019

@bacongobbler @karuppiah7890 It seems like the current set of resources is incorrect. For example, I have the secret v1/Secret/production-app-sentry-redis-ha-managed, which is actually used by redis-ha and was created by Helm, yet it counts as not present. On the other hand, Helm counts v1/Secret/production-app-sentry-redis-managed as present in the release, but it was deleted a long time ago and I do not use it anymore. Is there a way to edit the release manually? Constant deletion of resources via kubectl delete does not help.

@Antiarchitect
Contributor

@bacongobbler Have a look at this commit - it's very representative: helm/charts@26c7572
We have a StatefulSet with data we cannot afford to delete, and the new chart version upgrades the StatefulSet's apiVersion. What should we do?
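
One possible path (sketched here as an untested assumption, not an official recommendation) is to orphan-delete the StatefulSet so its pods and PVCs are left untouched, and then let the upgrade recreate the StatefulSet object under the new apiVersion:

# delete only the StatefulSet object; --cascade=false leaves the pods (and their PVCs) running
kubectl -n "$namespace" delete statefulset "${release_name}-redis-ha-managed-server" --cascade=false
# re-run the upgrade so the StatefulSet is recreated with the new apiVersion
helm upgrade "$release_name" . --install --namespace "$namespace" --values "$var_file_name"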

sameersbn pushed a commit to sameersbn/kubernetes-charts that referenced this issue Nov 8, 2019
In helm#17295 the `apiVersion` of the deployment resource was
updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks
upgradability of existing drupal deployments with helm v3 (rc) as described in
helm/helm#6646.

To fix this, we have defined a `drupal.deployment.apiVersion` helper that returns the new
`apiVersion` only when k8s 1.16 or higher is in use.

Signed-off-by: Sameer Naik <sameersbn@vmware.com>
sameersbn pushed a commit to sameersbn/kubernetes-charts that referenced this issue Nov 8, 2019
In helm#17294 the `apiVersion` of the deployment resource was
updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks
upgradability of existing dokuwiki deployments with helm v3 (rc) as described in
helm/helm#6646.

To fix this, we have defined a `dokuwiki.deployment.apiVersion` helper that returns the new
`apiVersion` only when k8s 1.16 or higher is in use.

Signed-off-by: Sameer Naik <sameersbn@vmware.com>
sameersbn pushed a commit to sameersbn/kubernetes-charts that referenced this issue Nov 8, 2019
In helm#17301 the `apiVersion` of the deployment resource was
updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks
upgradability of existing moodle deployments with helm v3 (rc) as described in
helm/helm#6646.

To fix this, we have defined a `moodle.deployment.apiVersion` helper that returns the new
`apiVersion` only when k8s 1.16 or higher is in use.

Signed-off-by: Sameer Naik <sameersbn@vmware.com>
sameersbn pushed a commit to sameersbn/kubernetes-charts that referenced this issue Nov 8, 2019
In helm#17298 the `apiVersion` of the deployment resource was
updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks
upgradability of existing jasperreports deployments with helm v3 (rc) as described in
helm/helm#6646.

To fix this, we have defined a `jasperreports.deployment.apiVersion` helper that returns the new
`apiVersion` only when k8s 1.16 or higher is in use.

Signed-off-by: Sameer Naik <sameersbn@vmware.com>
sameersbn pushed a commit to sameersbn/kubernetes-charts that referenced this issue Nov 8, 2019
In helm#17300 the `apiVersion` of the deployment resource was
updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks
upgradability of existing mediawiki deployments with helm v3 (rc) as described in
helm/helm#6646.

To fix this, we have defined a `mediawiki.deployment.apiVersion` helper that returns the new
`apiVersion` only when k8s 1.16 or higher is in use.

Signed-off-by: Sameer Naik <sameersbn@vmware.com>
sameersbn pushed a commit to sameersbn/kubernetes-charts that referenced this issue Nov 8, 2019
In helm#17281 the `apiVersion` of the deployment resource was
updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks
upgradability of existing postgresql deployments with helm v3 (rc) as described in
helm/helm#6646 and also effects any charts that depend on this chart

To fix this, we have defined a `postgresql.deployment.apiVersion` helper that returns the new
`apiVersion` only when k8s 1.16 or higher is in use.

Signed-off-by: Sameer Naik <sameersbn@vmware.com>
sameersbn pushed a commit to sameersbn/kubernetes-charts that referenced this issue Nov 8, 2019
In helm#17285 the `apiVersion` of the deployment resource was
updated to `apps/v1` in tune with the api's deprecated in k8s 1.16. This change however breaks
upgradability of existing kubewatch deployments with helm v3 (rc) as described in
helm/helm#6646.

To fix this, we have defined a `kubewatch.deployment.apiVersion` helper that returns the new
`apiVersion` only when k8s 1.16 or higher is in use.

Signed-off-by: Sameer Naik <sameersbn@vmware.com>
@bakayolo

FYI, seems highly related to #2947

And I am facing this issue without any API change.
On my end, it's a bunch of secrets.

@Antiarchitect
Contributor

Confirm. In some cases, it appears without any API changes.

@bakayolo

@bacongobbler We should reopen this case.
I am getting hit a lot by this issue.
And it's only on secrets.

Error: UPGRADE FAILED: rendered manifests contain a new resource that already exists. Unable to continue with update: existing resource conflict: kind: Secret, namespace: istio-system, name: kintossl-hk

And I can confirm using helm get all my-release that the kintossl-hk secret is in the release.

@danielefranceschi

@bacongobbler having the same issue with helm Version:"v3.0.0", gitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6" and a ConfigMap

@tomaustin700

I am also getting this trying to use helm 3 to update a service that was previously installed using helm 2
Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: kind: Service, namespace: ingress-basic, name: front

@FrederikNJS

Yep, just hit this one as well, on a secret. I'm using version 3.0.0.

@Antiarchitect
Contributor

@bacongobbler Seems like the problem persists. Should this issue be reopened?

@bnsmith17

Is there a workaround to 'ignore' existing resources, or something? Really hoping to be able to upgrade without blowing everything away and starting all over.

@phillycheeze

Ran into the same issue when upgrading from version 2 to 3. Using the 2to3 migration plugin.

@marcadella

Same issue v3.0.0. Please re-open!

@RiflerRick

Same issue in v3.0.0. Please re-open this issue. I tried
--cleanup-on-fail, --atomic, --force, almost everything, to mitigate this issue; obviously Helm is not able to detect that the resource was already created by itself.
I'm not able to understand how this issue can be reproduced.

@rossmckelvie

We are observing this issue on 3.0.0 with persistent volume claims. Please reopen.

@bacongobbler
Member

bacongobbler commented Nov 26, 2019

Please follow up with #6850 for issues related to resource creation. The issue raised by @alemorcuq results in the same error, but it is a different diagnosis than the issues being raised here. Thanks.

@RiflerRick

one quick fix would be to essentially allow helm to ignore the conflicting resource and go ahead with the release upgrade. I happened to make some changes to the src to make this happen
RiflerRick@1ee55a3

Clone the repo https://github.com/RiflerRick/helm/tree/debug-v3.0.0 and simply run make build. That will create a binary in ./bin. You can use that binary.

@RiflerRick

RiflerRick commented Nov 28, 2019

one quick fix would be to essentially allow helm to ignore the conflicting resource and go ahead with the release upgrade. I happened to make some changes to the src to make this happen
RiflerRick@1ee55a3

Clone the repo https://github.com/RiflerRick/helm/tree/debug-v3.0.0 and simply run make build. That will create a binary in ./bin. You can use that binary.

the bulk of the code is in

func (u *Upgrade) performUpgrade(originalRelease, upgradedRelease *release.Release) (*release.Release, error) {

Here is how Helm creates the target and current resource lists:

  • current comes from the current release
  • target comes from the manifest we provided

The possible cause

  • It then builds an existingResources map containing an entry for each element of current, keyed by that element's GVK (group, version, kind) plus name
  • Next it builds a toBeCreated list from target, adding the elements whose key does not match any entry in existingResources
  • It then queries Kubernetes to find out whether any resource in toBeCreated already exists. If so, the conflict occurs. In essence, any resource whose GVK changes after Helm has already created it will hit this conflict
  • If there are no conflicts, it executes Update(...) on the target
  • The GVK comes directly from Kubernetes and should never change in general, unless someone tampers with the resource's group, version, or kind
  • func (u *Upgrade) performUpgrade(originalRelease, upgradedRelease *release.Release) (*release.Release, error) {
    is responsible for actually performing the upgrade
  • The most probable cause of the issue is some tampering with the resources without using Helm.
    func objectKey(r *resource.Info) string {
    This function builds the GVK-based key Helm uses to compare resources. If this string ever differs between the existing and target resources, the conflict occurs
  • The conflict error is actually a failsafe on Helm's side. In theory it should only occur when resources created outside of Helm are later created with Helm. In other scenarios this error should not appear, which is why there is a GitHub issue about it: Helm 3.0.0 beta 4 - can't upgrade charts (existing resource conflict) #6646
    The fact that the issue is arising suggests that either the GVK from Kubernetes is being returned incorrectly for an existing resource, or that Helm is somehow corrupting its release secret while writing to it, leading to the false notion that a particular resource does not exist.

Possible Mitigation

One attempt at mitigation was to ignore the conflicting resource, essentially deleting the resource from the current and target lists. That way Helm does not modify the resource at all. The following commit essentially does that:
RiflerRick@1ee55a3

The commit above might result in inconsistent states in the Helm release secrets; that is something yet to be tested fully. If this is indeed the way to go, then a possible resolution might involve a flag, something like --ignore-conflicts, that would execute the portions of the code referenced by that commit.

I think this issue should be re-opened to work on it!

@Antiarchitect
Contributor

I also believe that a possible reason could be pressing Ctrl+C on a helm deploy while using --atomic and then deploying again afterwards. Could it be that Helm does not mark objects created by it (or does not record the release) until some phase, so objects created before the Ctrl+C are not recognized by Helm afterwards?
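
A way to check that theory might be to compare the release records Helm 3 keeps as Secrets in the release namespace with what Helm itself reports (the label names below are an assumption about how Helm 3.0 stores releases):

# release records stored by Helm 3, one Secret per revision; an interrupted --atomic run may leave one behind in a pending state
kubectl get secrets -n "$namespace" -l owner=helm,name="$release_name" --show-labels
# revision history as Helm sees it
helm history "$release_name" -n "$namespace"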

@sheerun
Contributor

sheerun commented Nov 28, 2019

I cannot update deployments to api v1 without removing them manually... Can you please fix it?

@bacongobbler
Member

bacongobbler commented Nov 28, 2019

I am locking this thread for the time being. As explained above, this thread and #6850 are separate issues. I want to be respectful of the OP's time: the issue originally raised here has since been resolved. Please carry on the conversation in #6850. Thanks!

helm locked this issue as resolved and limited conversation to collaborators Nov 28, 2019