helm-operator - failed to install release: rendered manifests contain a resource that already exists #3329

@esara

Bug Report

What did you do?
We started building operators with v0.8.x, which in the case of the helm-operator created a helm2 release; that release was later upgraded automatically to a helm3 release by v0.13.x - v0.17.x based operators.
We started to have problems when we further upgraded the same operator/custom resource from v0.17.x to a v0.18.x based helm operator.

Specifically, for custom resources created with a helm2 operator, the helm2 release is named with a hashed suffix (instead of just plain RELEASENAME):
$ kubectl get secrets RELEASENAME-84bxdj3wmek999m7mr3hnknlp.v1
NAME TYPE DATA AGE
RELEASENAME-84bxdj3wmek999m7mr3hnknlp.v1 Opaque 1 17s

What did you expect to see?
After upgrading the operator to a helm3 operator (1.17.1 operator sdk), it automatically upgrades the release to helm3:
$ kubectl logs -f t8c-operator-75f4d6f79-tt927
{"level":"info","ts":1593364045.881045,"logger":"helm.controller","msg":"Updated release","namespace":"turbonomic","name":"xl-release","apiVersion":"charts.helm.k8s.io/v1alpha1","kind":"Xl","release":"xl-release-84bxdj3wmek999m7mr3hnknlp","force":false}
$ helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
RELEASENAME-84bxdj3wmek999m7mr3hnknlp turbonomic 5 2020-06-28 17:07:24.960187627 +0000 UTC deployed xl-1.6.0 7.22
$ kubectl get secrets sh.helm.release.v1.RELEASENAME-84bxdj3wmek999m7mr3hnknlp.v5
NAME TYPE DATA AGE
sh.helm.release.v1.RELEASENAME-84bxdj3wmek999m7mr3hnknlp.v5 helm.sh/release.v1 1 88s
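For reference, the migrated release payload inside that secret can be inspected directly. This is a sketch assuming kubectl, jq, and access to the cluster; Helm 3 stores the release as base64-encoded, gzipped JSON under the secret's release key (which kubectl itself returns base64-encoded, hence the double decode):

```shell
# Sketch (assumes kubectl, jq, and cluster access): dump the release name,
# revision, and status from the Helm 3 storage secret shown above.
kubectl -n turbonomic get secret \
  sh.helm.release.v1.RELEASENAME-84bxdj3wmek999m7mr3hnknlp.v5 \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip \
  | jq '{name, version, status: .info.status}'
```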
$ kubectl logs -f t8c-operator-75f4d6f79-tt927
{"level":"info","ts":1593364047.209766,"logger":"helm.controller","msg":"Reconciled release","namespace":"turbonomic","name":"xl-release","apiVersion":"charts.helm.k8s.io/v1alpha1","kind":"Xl","release":"xl-release-84bxdj3wmek999m7mr3hnknlp"}

What did you see instead? Under which circumstances?
But if I upgrade to the 1.18.x based helm operator, it fails to apply any changes, with:
$ kubectl logs -f t8c-operator-6988fd7b6c-ls55s
{"level":"error","ts":1593364226.4781153,"logger":"helm.controller","msg":"Release failed","namespace":"turbonomic","name":"xl-release","apiVersion":"charts.helm.k8s.io/v1alpha1","kind":"Xl","release":"xl-release","error":"failed to install release: rendered manifests contain a resource that already exists. Unable to continue with install: PersistentVolumeClaim "rsyslog-auditlogdata" in namespace "turbonomic" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "xl-release"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "turbonomic"","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tpkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\ngithub.com/operator-framework/operator-sdk/pkg/helm/controller.HelmOperatorReconciler.Reconcile\n\tsrc/github.com/operator-framework/operator-sdk/pkg/helm/controller/reconcile.go:181\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/internal/controller/controller.go:256\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tpkg/mod/k8s.io/apimachinery@v0.18.2/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tpkg/mod/k8s.io/apimachinery@v0.18.2/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tpkg/mod/k8s.io/apimachinery@v0.18.2/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\tpkg/mod/k8s.io/apimachinery@v0.18.2/pkg/util/wait/wait.go:90"}

Note that the actual helm chart used by the operator is the same…

Environment

  • operator-sdk version:
    upgrading from v0.17.x to v0.18.x

  • Kubernetes version information:
    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.10", GitCommit:"575467a0eaf3ca1f20eb86215b3bde40a5ae617a", GitTreeState:"clean", BuildDate:"2019-12-11T12:32:32Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

and

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.3+a637491", GitCommit:"a637491", GitTreeState:"clean", BuildDate:"2020-06-05T15:48:59Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

Possible Solution
Tried the approach from helm/helm#7649, but it deleted and recreated the resources and still failed.

Additional context
Also looked at helm/helm#8078, but this is a different problem: the helm chart is valid, installs fine with both helm2 and helm3, and upgrades fine with v0.17.x (or earlier).

Labels

language/helm (Issue is related to a Helm operator project); triage/support (Indicates an issue that is a support question)
