
Helm init fails on Kubernetes 1.16.0 #6374

Closed · jckasper opened this issue Sep 6, 2019 · 83 comments
Labels: bug (Categorizes issue or PR as related to a bug.) · Milestone: 2.15.0

@jckasper
jckasper commented Sep 6, 2019

Output of helm version: v2.14.3
Output of kubectl version: client: v1.15.3, server: v1.16.0-rc.1
Cloud Provider/Platform (AKS, GKE, Minikube etc.): IBM Cloud Kubernetes Service

$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/xxxx/.helm.
Error: error installing: the server could not find the requested resource

$ helm init --debug --service-account tiller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
...

Looks like helm is trying to create the tiller Deployment with apiVersion: extensions/v1beta1.
According to https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16,
that apiVersion is no longer supported in 1.16.
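
A quick way to confirm this against the cluster (plain kubectl; on 1.16 Deployments are served only from apps/v1):

kubectl api-resources --api-group=extensions   # deployments no longer listed here on 1.16
kubectl api-resources --api-group=apps         # deployments, daemonsets, etc. live here now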

@bacongobbler bacongobbler added the bug Categorizes issue or PR as related to a bug. label Sep 6, 2019
@bacongobbler bacongobbler added this to the 2.15.0 milestone Sep 6, 2019
@bacongobbler
Member

bacongobbler commented Sep 6, 2019

We've avoided updating Tiller to apps/v1 in the past due to the complexity of having helm init --upgrade reconcile both extensions/v1beta1 and apps/v1 tiller Deployments. It looks like once we start supporting Kubernetes 1.16.0 we will have to handle that case and migrate to the newer apiVersion.
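
For context, on a pre-1.16 cluster the same tiller-deploy object is served under both groups, which is exactly what an upgrade path would have to reconcile (standard kubectl, shown purely for illustration):

kubectl -n kube-system get deployments.extensions tiller-deploy -o jsonpath='{.apiVersion}'   # extensions/v1beta1
kubectl -n kube-system get deployments.apps tiller-deploy -o jsonpath='{.apiVersion}'         # apps/v1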

@mattymo

mattymo commented Sep 7, 2019

Here's a short-term workaround:

helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Actually, it's not good enough. I still get an error:

error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec

This can be patched in with:

/usr/local/bin/kubectl patch --local -oyaml -f - -p '{"spec":{"selector":{"matchLabels":{"app":"helm","name":"tiller"}}}}'

@bacongobbler
Member

bacongobbler commented Sep 7, 2019

Nice! You might be able to achieve the same effect with the --override flag rather than crazy sed hacks :)

@loxal

loxal commented Sep 19, 2019

Nice! You might be able to achieve the same effect with the --override flag rather than crazy sed hacks :)

Yes, but his crazy sed hacks I can copy & paste, whereas this helm init --override "apiVersion"="apps/v1" just does not work. Ok, the sed hack does not work either.

@jbrette

jbrette commented Sep 19, 2019

The current workaround seems to be something like this:

helm init --output yaml > tiller.yaml
and update the tiller.yaml:

  • change to apps/v1
  • add the selector field
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: helm
      name: tiller
....

@PierreF

PierreF commented Sep 19, 2019

The following sed works for me:

helm init --service-account tiller --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@  replicas: 1@  replicas: 1\n  selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' | kubectl apply -f -

The issue with @mattymo's solution (using kubectl patch --local) is that it seems not to work when its input contains multiple resources (here a Deployment and a Service).
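
If you do want the patch route anyway, one way around the multi-document limitation (a sketch, assuming GNU csplit is available) is to split the stream into one file per resource first:

helm init --service-account tiller --output yaml > tiller.yaml
# split on the '---' document separators; -z drops empty pieces
csplit -z tiller.yaml '/^---$/' '{*}'
# the Deployment lands in one of the resulting xx* files and can be patched on its own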

@jckasper jckasper changed the title Helm init fails on Kubernetes 1.16.0-beta.1 Helm init fails on Kubernetes 1.16.0-rc.1 Sep 19, 2019
@jckasper jckasper changed the title Helm init fails on Kubernetes 1.16.0-rc.1 Helm init fails on Kubernetes 1.16.0 Sep 19, 2019
@jckasper
Author

jckasper commented Sep 19, 2019

Kubernetes 1.16.0 was released yesterday: 9/18/2019.
Helm is broken on this latest Kubernetes release unless the above workaround is used.

When will this issue be fixed, and when will Helm 2.15.0 be released?

@mihivagyok

If you want to use one less sed :)
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Thanks!

@gm12367

gm12367 commented Sep 20, 2019

Today I met the same issue. I changed the manifest myself: I switched the apiVersion to apps/v1 and added the selector section. So far it performs great. Below is my YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}

@ntwrkguru

@jbrette you are my hero! I was struggling with the selector stanza.


@puww1010

(quoting @gm12367's apps/v1 manifest from above)

@gm12367 How do I change it, and can you describe it in more detail?

@gm12367

gm12367 commented Sep 22, 2019

(quoting @gm12367's apps/v1 manifest from above)

@gm12367 How do I change it, and can you describe it in more detail?

For example, you can use helm init --service-account tiller --tiller-namespace kube-system --debug to print the manifests in YAML format; the --debug option does this.

@puww1010

@gm12367 Yes, I can see the printed output, but it is only output. What command can I use to change the output?

@puww1010

@gm12367 I want to change it to apps/v1 and add the selector section

@gm12367

gm12367 commented Sep 22, 2019

@puww1010 I just redirected the output to a file and then used Vim to change it. The commands below are for reference.

# helm init --service-account tiller --tiller-namespace kube-system --debug >> helm-init.yaml
# vim helm-init.yaml
# kubectl apply -f helm-init.yaml

@jbrette

jbrette commented Sep 22, 2019

If your Go environment is set up and you can't wait for the PR that fixes this issue ([Helm init compatible with Kubernetes 1.16] #6462) to be merged, you can always do the following:

Build

mkdir -p ${GOPATH}/src/k8s.io
cd ${GOPATH}/src/k8s.io 
git clone -b kube16 https://github.com/keleustes/helm.git
cd helm
make bootstrap build

Test:

kubectl version

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
./bin/helm init --wait --tiller-image gcr.io/kubernetes-helm/tiller:v2.14.3
Creating /home/xxx/.helm
Creating /home/xxx/.helm/repository
Creating /home/xxx/.helm/repository/cache
Creating /home/xxx/.helm/repository/local
Creating /home/xxx/.helm/plugins
Creating /home/xxx/.helm/starters
Creating /home/xxx/.helm/cache/archive
Creating /home/xxx/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/xxx/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
kubectl get deployment.apps/tiller-deploy -n kube-system -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-09-22T01:01:11Z"
  generation: 1
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
  resourceVersion: "553"
  selfLink: /apis/apps/v1/namespaces/kube-system/deployments/tiller-deploy
  uid: 124001ca-6f31-417e-950b-2452ce70f522
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: helm
      name: tiller
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /liveness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
          protocol: TCP
        - containerPort: 44135
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readiness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-09-22T01:01:23Z"
    lastUpdateTime: "2019-09-22T01:01:23Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-09-22T01:01:11Z"
    lastUpdateTime: "2019-09-22T01:01:23Z"
    message: ReplicaSet "tiller-deploy-568db6b69f" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

@puww1010

@jbrette Still having the same issue after following your instructions

@jbrette

jbrette commented Sep 22, 2019

@jbrette Still having the same issue after following your instructions

Looks like you typed "helm" instead of "./bin/helm", so you are using the old version of the binary.
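
A quick way to check which binary the shell is actually picking up (plain shell, nothing Helm-specific):

command -v helm          # the old helm on your $PATH
./bin/helm version -c    # the freshly built client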

@uniuuu

uniuuu commented Sep 22, 2019

After a successful init you won't be able to install a chart package from the repository until you replace extensions/v1beta1 in it as well.
Here is how to adapt any chart from the repository for k8s v1.16.0.
The example is based on prometheus chart.

git clone https://github.com/helm/charts
cd charts/stable

Replace extensions/v1beta1 with policy/v1beta1 for PodSecurityPolicy:

sed -i 's@apiVersion: extensions/v1beta1@apiVersion: policy/v1beta1@' `find . -iregex ".*yaml\|.*yml" -exec awk '/kind:\s+PodSecurityPolicy/ {print FILENAME}' {} +`

NetworkPolicy apiVersion is handled well by _helpers.tpl for those charts where it is used.

Replace extensions/v1beta1 (and apps/v1beta2) with apps/v1 in Deployment, StatefulSet, ReplicaSet, and DaemonSet:

sed -i 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' `find . -iregex ".*yaml\|.*yml" -exec awk '/kind:\s+(Deployment|StatefulSet|ReplicaSet|DaemonSet)/ {print FILENAME}' {} +`
sed -i 's@apiVersion: apps/v1beta2@apiVersion: apps/v1@' `find . -iregex ".*yaml\|.*yml" -exec awk '/kind:\s+(Deployment|StatefulSet|ReplicaSet|DaemonSet)/ {print FILENAME}' {} +`

Create a new package:

helm package ./prometheus
Successfully packaged chart and saved it to: /home/vagrant/charts/stable/prometheus-9.1.1.tgz

Install it:
helm install /home/vagrant/charts/stable/prometheus-9.1.1.tgz

Based on https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/

P.S. For some charts with dependencies you might need to run helm dependency update and replace the dependency tgz files with patched ones where applicable.
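
A sketch of that extra step, reusing the prometheus example from above:

helm dependency update ./prometheus    # refreshes charts/ from requirements.yaml
# replace any affected .tgz under ./prometheus/charts/ with a patched build, then:
helm package ./prometheus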

@c0debreaker

Getting the same error when running helm init --history-max 200

output

$HELM_HOME has been configured at /Users/neil/.helm.
Error: error installing: the server could not find the requested resource
$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Error: could not find tiller

@0xwilliampeter

0xwilliampeter commented Nov 7, 2019

I was forwarded to @jbrette's solution. This is what I got when I ran it:

error: error parsing STDIN: error converting YAML to JSON: yaml: line 11: mapping values are not allowed in this context

Check the YAML files; in most cases the referenced line has {} or [] but still has other things defined under it, which causes the error. Most often the issue is in values.yaml; otherwise check the templates section of the chart.
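
As an illustration (a made-up fragment, not from any chart here), this is the shape that triggers that parser error:

resources: {}     # this node is already closed by {}
  limits:         # ...so indenting further keys under it is invalid YAML
    cpu: 100m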

@derba

derba commented Nov 8, 2019

Just a side note on @PierreF's and @mihivagyok's solutions: those did not work for me when using private Helm repos.

$ helm repo add companyrepo https://companyrepo
Error: Couldn't load repositories file (/home/username/.helm/repository/repositories.yaml).

I guess that happens because helm init is never actually run; it just generates the YAML file. I fixed it by additionally running helm init -c.
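
A minimal sketch of that combined flow, reusing @PierreF's sed workaround from above (--client-only is the long form of -c):

helm init --service-account tiller --output yaml \
  | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' \
  | sed 's@  replicas: 1@  replicas: 1\n  selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' \
  | kubectl apply -f -
helm init --client-only    # sets up $HELM_HOME, including repository/repositories.yaml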

BjoernT added a commit to BjoernT/kubespray that referenced this issue Nov 11, 2019
Since upgrading k8s beyond version 1.16.0, helm init no longer works
with helm < 2.16.0 due to helm/helm#6374

This PR closes issue kubernetes-sigs#5331
k8s-ci-robot pushed a commit to kubernetes-sigs/kubespray that referenced this issue Nov 11, 2019
Since upgrading k8s beyond version 1.16.0, helm init no longer works
with helm < 2.16.0 due to helm/helm#6374

This PR closes issue #5331
LuckySB pushed a commit to southbridgeio/kubespray that referenced this issue Dec 9, 2019
Since upgrading k8s beyond version 1.16.0, helm init no longer works
with helm < 2.16.0 due to helm/helm#6374

This PR closes issue kubernetes-sigs#5331
@WoodProgrammer

FYI, in k8s v1.16.6 the helm init output requires spec.selector.

@caodangtinh

(quoting @jbrette's apps/v1 workaround from above)

It works. Since Kubernetes changed the Deployment apiVersion to apps/v1, the one thing that needs to change is adding selector.matchLabels to the spec.
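
In other words, the only addition under spec, relative to the old manifest, is this stanza (as in the workaround quoted above):

  selector:
    matchLabels:
      app: helm
      name: tiller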

@VGerris

VGerris commented Jun 1, 2020

Another workaround can be to use helm 3, which does not use tiller.
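
A minimal sketch of that route (release and chart names are illustrative); Helm 3 has no Tiller, so the server-side apiVersion problem never comes up:

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install my-prometheus stable/prometheus    # Helm 3 syntax: release name first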

@manaschandrasahoo

helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Hi, while trying this I am getting this:

jenkins@jenkin:~/.kube$ helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Command 'kubectl' not found, but can be installed with:

snap install kubectl
Please ask your administrator.

jenkins@jenkin:~/.kube$

@manaschandrasahoo

(quoting the original issue report from above)

I am getting this error; how can I solve it?

root@jenkin:# helm init --service-account tiller
$HELM_HOME has been configured at /root/.helm.
Error: error installing: unknown (post deployments.extensions)
root@jenkin:#

@manaschandrasahoo

(quoting @mattymo's workaround from above)

I am getting this error:

jenkins@jenkin:~/.helm$ helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Command 'kubectl' not found, but can be installed with:

snap install kubectl
Please ask your administrator.

jenkins@jenkin:~/.helm$

@StephanX

Workaround, using jq:

helm init -o json | jq '(select(.apiVersion == "extensions/v1beta1") .apiVersion = "apps/v1")' | jq '(select(.kind == "Deployment") .spec.selector.matchLabels.app = "helm")' | jq '(select(.kind == "Deployment") .spec.selector.matchLabels.name = "tiller")' | kubectl create -f -

@ikarlashov

Workaround, using jq:

helm init -o json | jq '(select(.apiVersion == "extensions/v1beta1") .apiVersion = "apps/v1")' | jq '(select(.kind == "Deployment") .spec.selector.matchLabels.app = "helm")' | jq '(select(.kind == "Deployment") .spec.selector.matchLabels.name = "tiller")' | kubectl create -f -

You can't update a resource with kubectl create.

@StephanX

StephanX commented Jul 30, 2020

@ikarlashov easy enough to replace 'create' with 'apply.' The one-liner above presumes one hasn't tried creating the resources yet.
