Helm delete does not clean the custom resource definitions #7688

Closed
nixgadget opened this Issue Aug 7, 2018 · 37 comments

nixgadget commented Aug 7, 2018

Helm:

Client: &version.Version{SemVer:"v2.10.0-rc.2", GitCommit:"56154102a2f25ebf679c791907fd355bb0377f05", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0-rc.2", GitCommit:"56154102a2f25ebf679c791907fd355bb0377f05", GitTreeState:"clean"}

Istio: 1.0.0

Kubectl:

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-19T00:05:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

After deleting Istio 1.0.0 with helm delete --purge, I notice that the CRDs are left behind as residue, and a reinstall fails with the error:

Error: customresourcedefinitions.apiextensions.k8s.io "gateways.networking.istio.io" already exists
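The commands involved are roughly the following (the release name and chart path are assumptions based on the standard Istio Helm install instructions, not an exact transcript of my session):

helm delete --purge istio
helm install install/kubernetes/helm/istio --name istio --namespace istio-system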

In the Tiller logs, I can see:

[tiller] 2018/08/07 12:07:28 executing 55 post-delete hooks for is
[kube] 2018/08/07 12:07:28 building resources from manifest
[kube] 2018/08/07 12:07:28 creating 1 resource(s)

However, the CRDs remain:

k get customresourcedefinitions | grep istio
adapters.config.istio.io                      1h
apikeys.config.istio.io                       1h
attributemanifests.config.istio.io            1h
authorizations.config.istio.io                1h
bypasses.config.istio.io                      1h
checknothings.config.istio.io                 1h
circonuses.config.istio.io                    1h
deniers.config.istio.io                       1h
destinationrules.networking.istio.io          1h
edges.config.istio.io                         1h
envoyfilters.networking.istio.io              1h
fluentds.config.istio.io                      1h
gateways.networking.istio.io                  1h
handlers.config.istio.io                      1h
httpapispecbindings.config.istio.io           1h
httpapispecs.config.istio.io                  1h
instances.config.istio.io                     1h
....

Has anyone noticed this with Helm v2.10.0-rc.2?

ymesika commented Aug 7, 2018

That's right. In 1.0.0 the CRDs were taken out of Helm's management and moved into their own YAML file, and we require users who install with Helm to first install that CRD YAML.

Therefore, since they are unmanaged, Helm won't delete them. As with installation, users are expected to delete them themselves by executing kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system.
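For reference, a minimal sketch of that two-step install/uninstall workflow (the chart path follows the Istio 1.0 release layout; adjust it to your checkout):

# install: apply the unmanaged CRDs first, then the chart
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
helm install install/kubernetes/helm/istio --name istio --namespace istio-system

# uninstall: delete the release, then the CRDs that Helm never managed
helm delete --purge istio
kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system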

nixgadget commented Aug 7, 2018

@ymesika what you have stated, which is reflected in the Istio installation steps (https://istio.io/docs/setup/kubernetes/helm-install/#installation-steps),

If using a Helm version prior to 2.10.0, install Istio’s Custom Resource Definitions via kubectl apply, and wait a few seconds for the CRDs to be committed in the kube-apiserver

is correct for Helm versions prior to 2.10.0.

What about versions >= 2.10.0?
I was under the impression that with Helm 2.10.0+ it is possible to install CRDs with crd-install hooks, and I can see that this is already enabled in Istio 1.0.0.

I posted a question on Helm about this as well, helm/helm#4440
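For context, the crd-install hook works by annotating the CRD manifests themselves so Helm creates them before the rest of the chart is rendered and validated; a minimal hypothetical example (the spec fields below are illustrative, not copied from Istio's actual manifest):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: gateways.networking.istio.io
  annotations:
    "helm.sh/hook": crd-install
spec:
  group: networking.istio.io
  version: v1alpha3
  scope: Namespaced
  names:
    kind: Gateway
    plural: gateways
    singular: gateway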

ymesika commented Aug 7, 2018

Thanks for clarifying. Yes, that was the plan.
cc @linsun

@sdake self-assigned this Aug 7, 2018

@sdake added this to the 1.0 milestone Aug 7, 2018

nixgadget commented Aug 7, 2018

@sdake happy to test it out when you are done. Thanks for looking into it.

squillace commented Aug 24, 2018

Me too. I just tested this with Helm 2.10 and Istio 1.0 and hit the same error. Did this ever get fixed?

@munrodg modified the milestones: 1.0, 1.1 Nov 2, 2018

amyroh commented Nov 6, 2018

Since Helm versions 2.10.0 and later do not require installing Istio's CRDs with a separate kubectl apply command, I think it's fair to expect helm delete istio --purge to delete the CRDs without having to remove them explicitly.

linsun commented Nov 6, 2018

I think this is a Helm issue, where helm delete istio --purge doesn't clean up CRDs properly even if you are already on Helm 2.10. I'm glad you are raising this with the Helm GitHub repo directly too.

nixgadget commented Nov 6, 2018

Yeah, I ended up writing a custom Helm job to achieve this, using:

"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": hook-succeeded
Keisone commented Nov 7, 2018

I had the same issue and I had to delete manually:

kubectl delete crd gateways.networking.istio.io

nixgadget commented Nov 7, 2018

For anyone wanting to work around this until a permanent fix is released:

{{- if .Values.global.rbacEnabled }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "istio.customJob" . }}-cc-sa
  labels:
    app: {{ template "istio.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    istio: customjob-cc
    release: {{ .Release.Name }}
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "1"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ template "istio.customJob" . }}-cc-cr
  labels:
    app: {{ template "istio.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    istio: customjob-cc
    release: {{ .Release.Name }}
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "1"
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ template "istio.customJob" . }}-cc-crb
  labels:
    app: {{ template "istio.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    istio: customjob-cc
    release: {{ .Release.Name }}
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "2"
subjects:
- name: {{ template "istio.customJob" . }}-cc-sa
  kind: ServiceAccount
  namespace: {{ .Release.Namespace }}
roleRef:
  name: {{ template "istio.customJob" . }}-cc-cr
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
---
{{- end }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "istio.customJob" . }}-cc
  labels:
    app: {{ template "istio.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    heritage: {{ .Release.Service }}
    istio: customjob-cc
    release: {{ .Release.Name }}
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "3"
spec:
  template:
    metadata:
      name: {{ template "istio.customJob" . }}-cc
      labels:
        app: {{ template "istio.name" . }}
        istio: customjob-cc
        release: {{ .Release.Name }}
    spec:
      {{- if .Values.global.rbacEnabled }}
      serviceAccountName: {{ template "istio.customJob" . }}-cc-sa
      {{- end }}
      restartPolicy: OnFailure
      affinity:
        {{- if .Values.global.kubectl.nodeAffinity }}
        nodeAffinity:
{{ toYaml .Values.global.kubectl.nodeAffinity | indent 10 }}
        {{- end }}
        {{- if eq .Values.global.kubectl.antiAffinity "hard" }}
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: {{ template "istio.name" . }}
                  istio: customjob-cc
                  release: {{ .Release.Name }}
        {{- else if eq .Values.global.kubectl.antiAffinity "soft" }}
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: {{ template "istio.name" . }}
                  istio: customjob-cc
                  release: {{ .Release.Name }}
        {{- end }}
      {{- if .Values.global.kubectl.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.global.kubectl.nodeSelector | indent 8 }}
      {{- end }}
      containers:
        - name: {{ template "istio.customJob" . }}-cc-kubectl
          image: {{ .Values.global.image.repo }}/{{ .Values.global.kubectl.image }}:{{ .Values.global.image.tag }}
          imagePullPolicy: {{ .Values.global.image.pullPolicy }}
          command:
          - /bin/bash
          - -c
          - >
              kubectl get customresourcedefinitions | grep "istio.io" | while read -r entry; do
                name=$(echo $entry | awk '{print $1}');
                kubectl delete customresourcedefinitions $name;
              done
jeremyxu2010 commented Nov 8, 2018

template "istio.customJob" not defined, how to resolve it?

nixgadget commented Nov 8, 2018

@jeremyxu2010 you should set your own metadata, but if you want to follow my example you can simply add the following to _helpers.tpl:

{{- define "istio.customJob" -}}
{{- template "istio.fullname" . -}}-custom-job
{{- end -}}

The important parts are the Helm annotations, the ClusterRole (if you have RBAC enabled), and the delete command. Let me know if this doesn't make sense.

nixgadget commented Nov 8, 2018

Here's a much simplified version of the above, for the curious:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: istio-custom-job-cc-sa
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "1"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-custom-job-cc-cr
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "1"
rules:
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istio-custom-job-cc-crb
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "2"
subjects:
- name: istio-custom-job-cc-sa
  kind: ServiceAccount
  namespace: {{ .Release.Namespace }}
roleRef:
  name: istio-custom-job-cc-cr
  kind: ClusterRole
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: istio-custom-job-cc
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
    "helm.sh/hook-weight": "3"
spec:
  template:
    metadata:
      name: istio-custom-job-cc
    spec:
      serviceAccountName: istio-custom-job-cc-sa
      restartPolicy: OnFailure
      containers:
        - name: istio-custom-job-cc-kubectl
          image: gcr.io/istio-release/kubectl:release-1.1-20181101-09-15
          imagePullPolicy: IfNotPresent
          command:
          - /bin/bash
          - -c
          - >
              kubectl get customresourcedefinitions | grep "istio.io" | while read -r entry; do
                name=$(echo $entry | awk '{print $1}');
                kubectl delete customresourcedefinitions $name;
              done
jeremyxu2010 commented Nov 9, 2018

@nixgadget it works, thank you very much!

Because my earlier Istio deletion failed (I have to use our internal hub's kubectl image repository), I also had to clean up Istio's ServiceAccounts, ClusterRoles, and ClusterRoleBindings:

kubectl get serviceaccount | grep 'istio'|awk '{print $1}'|xargs kubectl delete serviceaccount
kubectl get clusterrole | grep 'istio'|awk '{print $1}'|xargs kubectl delete clusterrole
kubectl get clusterrolebindings | grep 'istio'|awk '{print $1}'|xargs kubectl delete clusterrolebindings

kubectl get serviceaccount -n istio-system | grep 'istio'|awk '{print $1}'|xargs kubectl delete serviceaccount  -n istio-system
kubectl get clusterrole  -n istio-system | grep 'istio'|awk '{print $1}'|xargs kubectl delete clusterrole  -n istio-system
kubectl get clusterrolebindings  -n istio-system | grep 'istio'|awk '{print $1}'|xargs kubectl delete clusterrolebindings  -n istio-system
AbrahamAlcaina commented Nov 20, 2018

And add this one

kubectl get customresourcedefinition  -n istio-system | grep 'istio'|awk '{print $1}'|xargs kubectl delete customresourcedefinition  -n istio-system
sdake commented Nov 25, 2018

Yeah I ended up having a custom helm job to achieve this using,

"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": hook-succeeded

I'd highly recommend against this solution. This will make upgrades very difficult for you in the future.

sdake commented Nov 25, 2018

Ok peeps, apologies for my silence so far on this thread. I have been sorting out a path for CRDs to work properly in a Helm upgrade scenario. That work is here: #10120

It is more important that helm upgrade works than that helm delete --purge works for the case of dangling CRDs. I have commented on several issues with Helm upstream, and the conclusion I am coming to is that crd-install is not a priority for the 2.y series. As Helm is being completely reworked in the 3.y.z series, crd-install may no longer be a solution.

In summary, crd-install does not work in a helm upgrade scenario. It has many negative side effects, depending on which versions you upgrade from and to with crd-install, which we are only finding out about now.

Since most people on this issue tracker are using Helm 2.10+ with Istio 1.0.z, I want to provide you with a smooth upgrade experience. I am not sure whether the CRDs can be removed in an automated way. They certainly can't be via helm delete --purge, as they are unmanaged objects. The Helm community is well aware of this limitation and offers no solutions.

In the meantime, I'd encourage folks to use the two-step installation/removal process. I believe this causes crd-install to be a no-op.

**Note**

The Helm community has indicated that leaving the CRDs unmanaged is a conscious choice, so that people do not lose their custom resource data during a helm delete --purge. Instead, you have to work a little harder to remove them. This is logically sound, although clearly not ideal for many individuals, especially those doing evaluations.

**Use caution**

If you are doing an evaluation, the solution from @AbrahamAlcaina works well enough: #7688 (comment). Another option is kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml. Note this will delete all of your existing custom resources, which you may not care about in an evaluation but will certainly care about in production.

sdake commented Nov 25, 2018

@nixgadget thanks for digging into the code base. I haven't tested whether this would work; however, we have had a lot of problems with Helm hooks and are removing/have removed them from the code base.

As such, if you use this approach, you may end up with a Helm chart that can't be upgraded. I just thought you should know.

Cheers
-steve

jeremyxu2010 commented Nov 25, 2018

Yeah I ended up having a custom helm job to achieve this using,

"helm.sh/hook": post-delete
"helm.sh/hook-delete-policy": hook-succeeded

I'd highly recommend against this solution. This will make upgrades very difficult for you in the future.

@sdake maybe we should use "helm.sh/hook-delete-policy": "before-hook-creation"; see here.
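For illustration, a hypothetical annotation block for the cleanup Job with that policy (before-hook-creation deletes the previous hook resource only when the hook next fires, rather than immediately on success):

metadata:
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": before-hook-creation
    "helm.sh/hook-weight": "3"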

sdake commented Nov 27, 2018

Hey folks,

Helm upstream recommends letting CRDs leak when using the crd-install hook policy. The reason the CRDs should leak is that the human operator should retain full control over deleting the mesh configuration stored in those custom resources. As a result, I am marking this closed, as it works as intended. For folks who need to clean up in their evaluations, there is documentation here:
https://istio.io/docs/setup/kubernetes/helm-install/#uninstall

Note the last step:

If desired, delete the CRDs:

$ kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system

Cheers
-steve

@sdake closed this Nov 27, 2018

sdake commented Nov 27, 2018

@jeremyxu2010 at this point, I can't recommend using any type of hook feature in Helm. We are migrating away from hooks in Istio upstream as soon as viable.

Cheers
-steve

sdake commented Nov 27, 2018

Since Helm versions 2.10.0 and later do not require installing Istio's CRDs with a separate kubectl apply command, I think it's fair to expect helm delete istio --purge to delete the CRDs without having to remove them explicitly.

@amyroh we are moving away from crd-install as it is full of various problems. One of the most egregious is that CRDs cannot be added during an upgrade. There are other problems - well described here:
#9604 (comment)

I am hopeful that Helm upstream can repair these problems in the future, but it is clear this will not happen before the Istio 1.1 release, for which we must fix these problems soon.

nixgadget commented Nov 27, 2018

@sdake Thanks for the heads up, Steve. It makes sense when it comes to helm upgrade. So what's the plan of attack for 1.1, which is due soonish?

sdake commented Nov 30, 2018

@nixgadget the upgrade issue is being worked here:
#9884

Cheers
-steve

MandarJKulkarni commented Dec 4, 2018

I am facing this issue even though I have done:
helm del --purge istio
kubectl delete -f .\install\kubernetes\istio-demo.yaml
kubectl delete -f install/kubernetes/helm/istio/templates/crds.yaml -n istio-system

But,
helm install install/kubernetes/helm/istio --name istio --namespace istio-system
still gives me
Error: release istio failed: customresourcedefinitions.apiextensions.k8s.io "deniers.config.istio.io" already exists

I am on Windows 10.
istio-1.1.0-snapshot.3
helm-v2.12.0-rc.1-windows-amd64

CodeJjang commented Dec 8, 2018

Also having this issue, despite deleting (--purge) the istio release AND deleting the crds.yaml as specified here and in the docs.

helm install istio-1.0.4/install/kubernetes/helm/istio --name istio --namespace istio-system
Error: release istio failed: customresourcedefinitions.apiextensions.k8s.io "instances.config.istio.io" already exists

And more awkwardly:

kubectl get customresourcedefinitions -n istio-system
No resources found.

So I'm kind of stuck. Can't remove the custom resource definitions at all.

Ubuntu 16.04
Istio 1.0.4
Helm v.2.12.0

markmandel added a commit to GoogleCloudPlatform/agones that referenced this issue Dec 9, 2018

Remove crd-install hook, as it breaks CRD updates
This is a problem that Helm is going to solve going forward, but for now
if you use the crd-install hook, you can only install CRDs, and not
update them at any point during the chart lifecycle.

Also, prior to Helm 2.12, if you installed a chart with a crd-install hook
over one that did not previously have it, the CRDs were deleted.

Therefore, removing the crd-install hook, so that CRDs are again managed
by the Helm charts.

Added an `agones.crd.install` parameter, in case someone wants to use this
chart as a subchart; they can set it to false and copy the Agones CRDs into
their own chart, to be included at the right place in their chart lifecycle.

Also, since we have an `agones.crd` config section, moved
`agones.enableHelmCleanupHooks` into `agones.crds.cleanupOnDelete`

Unfortunately, with this back and forth on the crd-install hook, if you are
using the Helm chart, you will need to do a full Agones
`helm delete --purge` and clean up any remaining CRDs in order to upgrade.

More context on helm + crds:
- helm/helm#4697
- istio/istio#9604
- istio/istio#7688
- helm/community#64
- helm/helm#4863
- helm/helm#4709
cstrahan commented Dec 10, 2018

Same issue here:

$ kubectl get customresourcedefinitions
No resources found.

$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system
Error: release istio failed: customresourcedefinitions.apiextensions.k8s.io "edges.config.istio.io" already exists

$ kubectl get customresourcedefinitions
NAME                                    CREATED AT
adapters.config.istio.io                2018-12-10T22:49:34Z
apikeys.config.istio.io                 2018-12-10T22:49:34Z
attributemanifests.config.istio.io      2018-12-10T22:49:34Z
authorizations.config.istio.io          2018-12-10T22:49:34Z
bypasses.config.istio.io                2018-12-10T22:49:34Z
checknothings.config.istio.io           2018-12-10T22:49:34Z
circonuses.config.istio.io              2018-12-10T22:49:34Z
deniers.config.istio.io                 2018-12-10T22:49:34Z
destinationrules.networking.istio.io    2018-12-10T22:49:34Z
edges.config.istio.io                   2018-12-10T22:49:34Z
envoyfilters.networking.istio.io        2018-12-10T22:49:34Z
fluentds.config.istio.io                2018-12-10T22:49:34Z
gateways.networking.istio.io            2018-12-10T22:49:34Z
handlers.config.istio.io                2018-12-10T22:49:34Z
httpapispecbindings.config.istio.io     2018-12-10T22:49:34Z
httpapispecs.config.istio.io            2018-12-10T22:49:34Z
instances.config.istio.io               2018-12-10T22:49:34Z
kubernetesenvs.config.istio.io          2018-12-10T22:49:34Z
kuberneteses.config.istio.io            2018-12-10T22:49:34Z
listcheckers.config.istio.io            2018-12-10T22:49:34Z
listentries.config.istio.io             2018-12-10T22:49:34Z
logentries.config.istio.io              2018-12-10T22:49:34Z
memquotas.config.istio.io               2018-12-10T22:49:34Z
metrics.config.istio.io                 2018-12-10T22:49:34Z
noops.config.istio.io                   2018-12-10T22:49:34Z
opas.config.istio.io                    2018-12-10T22:49:34Z
prometheuses.config.istio.io            2018-12-10T22:49:34Z
quotas.config.istio.io                  2018-12-10T22:49:34Z
quotaspecbindings.config.istio.io       2018-12-10T22:49:34Z
quotaspecs.config.istio.io              2018-12-10T22:49:34Z
rbacconfigs.rbac.istio.io               2018-12-10T22:49:34Z
rbacs.config.istio.io                   2018-12-10T22:49:34Z
redisquotas.config.istio.io             2018-12-10T22:49:34Z
reportnothings.config.istio.io          2018-12-10T22:49:34Z
rules.config.istio.io                   2018-12-10T22:49:34Z
servicecontrolreports.config.istio.io   2018-12-10T22:49:34Z
servicecontrols.config.istio.io         2018-12-10T22:49:34Z
serviceentries.networking.istio.io      2018-12-10T22:49:34Z
servicerolebindings.rbac.istio.io       2018-12-10T22:49:34Z
serviceroles.rbac.istio.io              2018-12-10T22:49:34Z
signalfxs.config.istio.io               2018-12-10T22:49:34Z
solarwindses.config.istio.io            2018-12-10T22:49:34Z
stackdrivers.config.istio.io            2018-12-10T22:49:34Z
statsds.config.istio.io                 2018-12-10T22:49:34Z
stdios.config.istio.io                  2018-12-10T22:49:34Z
templates.config.istio.io               2018-12-10T22:49:34Z
tracespans.config.istio.io              2018-12-10T22:49:34Z
virtualservices.networking.istio.io     2018-12-10T22:49:34Z
CodeJjang commented Dec 10, 2018

cstrahan commented Dec 10, 2018

@CodeJjang I'll take a look, thanks.

I'll also note that this resulted in the same error:

$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system --set global.crds=false

I was hoping that would prevent Helm/Tiller from creating the CRDs (allowing me to manage them myself), but it looks like it still attempts to create them.
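For reference, the flow I was hoping for would have looked roughly like this (assuming global.crds=false actually skipped rendering the CRD templates, which does not appear to be the case here):

# manage the CRDs outside of Helm
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml

# then install the chart without Tiller touching the CRDs
helm install install/kubernetes/helm/istio --name istio --namespace istio-system --set global.crds=false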

cstrahan commented Dec 10, 2018

@CodeJjang Reverting to helm/tiller v2.11.0 (both as the client and in the cluster) resolved this for me -- thanks! Is there an issue open in the helm issue tracker? If there is, I'm having a hard time finding it. As of now, the latest release (v2.12.0) of helm/tiller seems quite broken.

I had to install helm v2.11.0, run helm reset --force, delete any remaining CRDs, and then install istio via helm as per the docs.
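In case it helps anyone, the sequence was roughly the following (the helm init step and the exact CRD deletion command are my assumptions about how to redo the Tiller install, not an exact transcript):

helm reset --force        # remove the broken 2.12.0 Tiller
helm init                 # reinstall Tiller from the 2.11.0 client
kubectl get crd | grep 'istio.io' | awk '{print $1}' | xargs kubectl delete crd
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
helm install install/kubernetes/helm/istio --name istio --namespace istio-system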

MandarJKulkarni commented Dec 11, 2018

@cstrahan I installed v2.11.0 of Helm,
but helm reset --force didn't work for me.
The command responds saying the server is uninstalled; however, when I do helm list or helm delete, I again get the incompatible-versions error.
Maybe this is a different issue on Win10.

nixgadget commented Dec 11, 2018

@MandarJKulkarni what does helm --version show?

nixgadget commented Dec 11, 2018

I would hold off on the 2.12 upgrade as well. Some improvements were made to the crd-install hook in helm/helm#4709, which may be causing these issues.

MandarJKulkarni commented Dec 11, 2018

@MandarJKulkarni what does helm --version show?

Client: &version.Version{SemVer:"v2.11.0"
Server: &version.Version{SemVer:"v2.12.0-rc.1"

That's what it was, but I ended up deleting the cluster, since it didn't have any apps on it anyway.

nixgadget commented Dec 11, 2018

@MandarJKulkarni great, glad it worked out. You could also try removing the Tiller pod and then running helm reset --force.

igorlimansky commented Dec 19, 2018

Upgrading to 2.12.1 and doing helm reset --force fixed the issue for me.

isnellfeikema-isp commented Dec 20, 2018

Upgrading to helm/tiller 2.12.1 resolved the issue for me as well.
