
Helm init fails on Kubernetes 1.16.0 #6374

Closed
jckasper opened this issue Sep 6, 2019 · 74 comments
Labels
bug
Milestone

Comments

@jckasper commented Sep 6, 2019

Output of helm version: v2.14.3
Output of kubectl version: client: v1.15.3, server: v1.16.0-rc.1
Cloud Provider/Platform (AKS, GKE, Minikube etc.): IBM Cloud Kubernetes Service

$ helm init --service-account tiller
$HELM_HOME has been configured at /Users/xxxx/.helm.
Error: error installing: the server could not find the requested resource

$ helm init --debug --service-account tiller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
...

Looks like helm is trying to create the tiller Deployment with apiVersion: extensions/v1beta1.
According to https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16,
that is no longer supported.

@bacongobbler bacongobbler added the bug label Sep 6, 2019
@bacongobbler bacongobbler added this to the 2.15.0 milestone Sep 6, 2019
@bacongobbler (Member) commented Sep 6, 2019

We've avoided updating tiller to apps/v1 in the past due to complexity with having helm init --upgrade reconciling both extensions/v1beta1 and apps/v1 tiller Deployments. It looks like once we start supporting Kubernetes 1.16.0 we will have to handle that case going forward and migrate to the newer apiVersion.

@mattymo commented Sep 7, 2019

Here's a short-term workaround:

helm init --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Actually, it's not good enough. I still get an error:

error validating data: ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec

This can be patched in with:

/usr/local/bin/kubectl patch --local -oyaml -f - -p '{"spec":{"selector": {"app":"helm","name":"tiller"}}}'
@bacongobbler (Member) commented Sep 7, 2019

Nice! You might be able to achieve the same effect with the --override flag rather than crazy sed hacks :)

@loxal commented Sep 19, 2019

Nice! You might be able to achieve the same effect with the --override flag rather than crazy sed hacks :)

Yes, but the crazy sed hacks are something I can copy & paste, whereas helm init --override "apiVersion"="apps/v1" just does not work. OK, the sed hack does not work either.

@jbrette (Contributor) commented Sep 19, 2019

The current workaround seems to be something like this:

helm init --output yaml > tiller.yaml
and update the tiller.yaml:

  • change apiVersion to apps/v1
  • add the selector field
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: helm
      name: tiller
...
@PierreF commented Sep 19, 2019

The following sed works for me:

helm init --service-account tiller --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | sed 's@  replicas: 1@  replicas: 1\n  selector: {"matchLabels": {"app": "helm", "name": "tiller"}}@' | kubectl apply -f -

The issue with @mattymo's solution (using kubectl patch --local) is that it seems not to work when its input contains multiple resources (here a Deployment and a Service).
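For reference, the transformation this sed pipeline performs can be sketched in Python (illustrative only; the sample manifest below stands in for the output of helm init --output yaml, and patch_tiller_manifest is a hypothetical helper name):

```python
# Mirrors the two sed passes above: swap the deprecated apiVersion
# and inject a selector (with matchLabels) right after "replicas: 1".
def patch_tiller_manifest(manifest: str) -> str:
    patched = manifest.replace(
        "apiVersion: extensions/v1beta1", "apiVersion: apps/v1"
    )
    selector = '  selector: {"matchLabels": {"app": "helm", "name": "tiller"}}'
    return patched.replace("  replicas: 1", "  replicas: 1\n" + selector)

sample = """apiVersion: extensions/v1beta1
kind: Deployment
spec:
  replicas: 1
"""
print(patch_tiller_manifest(sample))
```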

@jckasper jckasper changed the title Helm init fails on Kubernetes 1.16.0-beta.1 Helm init fails on Kubernetes 1.16.0-rc.1 Sep 19, 2019
@jckasper jckasper changed the title Helm init fails on Kubernetes 1.16.0-rc.1 Helm init fails on Kubernetes 1.16.0 Sep 19, 2019
@jckasper (Author) commented Sep 19, 2019

Kubernetes 1.16.0 was released yesterday: 9/18/2019.
Helm is broken on this latest Kubernetes release unless the above workaround is used.

When will this issue be fixed, and when will Helm 2.15.0 be released?

@mihivagyok commented Sep 20, 2019

If you want to use one less sed :)
helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

Thanks!

@gm12367 commented Sep 20, 2019

Today I met the same issue and edited the manifest myself: I changed the apiVersion to apps/v1 and added the selector section. So far it works great; below is my YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}

@ntwrkguru commented Sep 20, 2019

@jbrette you are my hero! I was struggling with the selector stanza.

@puww1010 commented Sep 22, 2019

[quotes @gm12367's comment and YAML manifest above]

How do I change it? Can you describe it in more detail?

@puww1010 commented Sep 22, 2019

[quotes @gm12367's comment and YAML manifest above]

@gm12367 How do I change it? Can you describe it in more detail?

@gm12367 commented Sep 22, 2019

[quotes the YAML manifest and @puww1010's question above]

For example, you can run helm init --service-account tiller --tiller-namespace kube-system --debug to print the manifests in YAML format; the --debug option does this.

@puww1010 commented Sep 22, 2019

@gm12367 Yes, I can see the printed manifest, but that's all it is: output. What command can I use to change the output?

@puww1010 commented Sep 22, 2019

@gm12367 I want to change it to apps/v1 and add the selector section

@gm12367 commented Sep 22, 2019

@puww1010 I just redirected the output to a file and then used Vim to change it. The commands below are for reference.

# helm init --service-account tiller --tiller-namespace kube-system --debug >> helm-init.yaml
# vim helm-init.yaml
# kubectl apply -f helm-init.yaml
@jbrette (Contributor) commented Sep 22, 2019

If your Go environment is set up and you can't wait until the following PR which fixes this issue ([Helm init compatible with Kubernetes 1.16] #6462) is merged, you can always do:

Build

mkdir -p ${GOPATH}/src/k8s.io
cd ${GOPATH}/src/k8s.io
git clone -b kube16 https://github.com/keleustes/helm.git
cd helm
make bootstrap build

Test:

kubectl version

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
./bin/helm init --wait --tiller-image gcr.io/kubernetes-helm/tiller:v2.14.3
Creating /home/xxx/.helm
Creating /home/xxx/.helm/repository
Creating /home/xxx/.helm/repository/cache
Creating /home/xxx/.helm/repository/local
Creating /home/xxx/.helm/plugins
Creating /home/xxx/.helm/starters
Creating /home/xxx/.helm/cache/archive
Creating /home/xxx/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/xxx/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
kubectl get deployment.apps/tiller-deploy -n kube-system -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-09-22T01:01:11Z"
  generation: 1
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
  resourceVersion: "553"
  selfLink: /apis/apps/v1/namespaces/kube-system/deployments/tiller-deploy
  uid: 124001ca-6f31-417e-950b-2452ce70f522
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: helm
      name: tiller
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /liveness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
          protocol: TCP
        - containerPort: 44135
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readiness
            port: 44135
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-09-22T01:01:23Z"
    lastUpdateTime: "2019-09-22T01:01:23Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-09-22T01:01:11Z"
    lastUpdateTime: "2019-09-22T01:01:23Z"
    message: ReplicaSet "tiller-deploy-568db6b69f" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
@puww1010 commented Sep 22, 2019

@jbrette Still having the same issue after following your instructions

@jbrette (Contributor) commented Sep 22, 2019

@jbrette Still having the same issue after following your instruction

Looks like you typed "helm" instead of "./bin/helm", so you are using the old version of the binary.

@uniuuu commented Sep 22, 2019

After a successful init you won't be able to install a chart package from the repository until you replace extensions/v1beta1 in it as well.
Here is how to adapt any chart from the repository for k8s v1.16.0.
The example is based on the prometheus chart.

git clone https://github.com/helm/charts
cd charts/stable

Replace extensions/v1beta1 with policy/v1beta1 for PodSecurityPolicy:

sed -i 's@apiVersion: extensions/v1beta1@apiVersion: policy/v1beta1@' `find . -iregex ".*yaml\|.*yml" -exec awk '/kind:\s+PodSecurityPolicy/ {print FILENAME}' {} +`

NetworkPolicy apiVersion is handled well by _helpers.tpl for those charts where it is used.

Replace extensions/v1beta1 with apps/v1 in Deployment, StatefulSet, ReplicaSet, and DaemonSet:

sed -i 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' `find . -iregex ".*yaml\|.*yml" -exec awk '/kind:\s+(Deployment|StatefulSet|ReplicaSet|DaemonSet)/ {print FILENAME}' {} +`
sed -i 's@apiVersion: apps/v1beta2@apiVersion: apps/v1@' `find . -iregex ".*yaml\|.*yml" -exec awk '/kind:\s+(Deployment|StatefulSet|ReplicaSet|DaemonSet)/ {print FILENAME}' {} +`

Create a new package:

helm package ./prometheus
Successfully packaged chart and saved it to: /home/vagrant/charts/stable/prometheus-9.1.1.tgz

Install it:
helm install /home/vagrant/charts/stable/prometheus-9.1.1.tgz

Based on https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/

P.S. For some charts with dependencies you might need to use helm dependency update and replace dependent tgz with patched ones if applicable.
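The find/awk/sed pipeline above can be approximated in a short Python sketch (the kind filter and replacement pairs mirror the shell commands; patch_text and patch_chart are hypothetical helper names, not Helm tooling):

```python
import pathlib
import re

# Only files declaring one of the workload kinds get rewritten,
# matching the awk filter in the shell pipeline above.
WORKLOAD_KINDS = re.compile(
    r"kind:\s+(Deployment|StatefulSet|ReplicaSet|DaemonSet)"
)
REPLACEMENTS = [
    ("apiVersion: extensions/v1beta1", "apiVersion: apps/v1"),
    ("apiVersion: apps/v1beta2", "apiVersion: apps/v1"),
]

def patch_text(text: str) -> str:
    """Rewrite deprecated apiVersions, but only in workload manifests."""
    if WORKLOAD_KINDS.search(text):
        for old, new in REPLACEMENTS:
            text = text.replace(old, new)
    return text

def patch_chart(chart_dir: str) -> int:
    """Walk a chart directory and patch every .yaml/.yml file in place."""
    patched = 0
    for path in pathlib.Path(chart_dir).rglob("*.y*ml"):
        original = path.read_text()
        new_text = patch_text(original)
        if new_text != original:
            path.write_text(new_text)
            patched += 1
    return patched
```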

@c0debreaker commented Sep 22, 2019

Getting the same error when running helm init --history-max 200

output

$HELM_HOME has been configured at /Users/neil/.helm.
Error: error installing: the server could not find the requested resource
$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Error: could not find tiller
@Flood1993 commented Oct 10, 2019

I'm also getting the error:

$ helm init
$HELM_HOME has been configured at C:\Users\user\.helm.
Error: error installing: the server could not find the requested resource

I'm trying a solution proposed in this issue, particularly this one. However, after modifying the tiller.yaml file accordingly, I'm not able to update the configuration. I'm trying the following command in order to apply the changes/update the configuration:

$ kubectl apply -f tiller.yaml
deployment.apps/tiller-deploy configured
service/tiller-deploy configured

But then, if I run:

$ helm init --output yaml > tiller2.yaml

The tiller2.yaml file shows:

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  template:

Basically, the changes are not reflected. So I assume that I'm not updating the configuration properly. What would be the correct way to do it?


EDIT: I managed to get it running. I'm using Minikube, and in order to get it running, first I downgraded the Kubernetes version to 1.15.4.

minikube delete
minikube start --kubernetes-version=1.15.4

Then, I was using a proxy, so I had to add Minikube's IP to the NO_PROXY list: 192.168.99.101 in my case. See: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/

Note: After some further testing, perhaps the downgrade is not necessary, and maybe all I was missing was the NO_PROXY step. I added all 192.168.99.0/24, 192.168.39.0/24 and 10.96.0.0/12 to the NO_PROXY setting and now it seems to work fine.

@santoshr1016 commented Oct 13, 2019

helm init --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl apply -f -

It worked for me, thank you so much!

@peterwilliam860 commented Oct 14, 2019

As the Kubernetes API evolves, APIs are periodically reorganized or upgraded. When APIs evolve, the old API is deprecated and eventually removed.

The v1.16 release will stop serving the following deprecated API versions in favor of newer and more stable API versions:

NetworkPolicy (in the extensions/v1beta1 API group)
    Migrate to use the networking.k8s.io/v1 API, available since v1.8. Existing persisted data can be retrieved/updated via the networking.k8s.io/v1 API.
PodSecurityPolicy (in the extensions/v1beta1 API group)
    Migrate to use the policy/v1beta1 API, available since v1.10. Existing persisted data can be retrieved/updated via the policy/v1beta1 API.
DaemonSet, Deployment, StatefulSet, and ReplicaSet (in the extensions/v1beta1 and apps/v1beta2 API groups)
    Migrate to use the apps/v1 API, available since v1.9. Existing persisted data can be retrieved/updated via the apps/v1 API.

The v1.20 release will stop serving the following deprecated API versions in favor of newer and more stable API versions:

Ingress (in the extensions/v1beta1 API group)
    Migrate to use the networking.k8s.io/v1beta1 API, serving Ingress since v1.14. Existing persisted data can be retrieved/updated via the networking.k8s.io/v1beta1 API.

What to Do:

  • Change YAML files to reference the newer APIs
  • Update custom integrations and controllers to call the newer APIs
  • Update third party tools (ingress controllers, continuous delivery systems) to call the newer APIs

Refer to:
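The removals listed above can be summarized as a lookup table (a sketch, not an exhaustive deprecation matrix; note that StatefulSet historically lived in apps/v1beta1 and apps/v1beta2 rather than extensions/v1beta1):

```python
from typing import Optional

# (kind, deprecated apiVersion) -> replacement apiVersion served in v1.16
MIGRATIONS = {
    ("NetworkPolicy", "extensions/v1beta1"): "networking.k8s.io/v1",
    ("PodSecurityPolicy", "extensions/v1beta1"): "policy/v1beta1",
    ("Deployment", "extensions/v1beta1"): "apps/v1",
    ("DaemonSet", "extensions/v1beta1"): "apps/v1",
    ("ReplicaSet", "extensions/v1beta1"): "apps/v1",
    ("Deployment", "apps/v1beta2"): "apps/v1",
    ("DaemonSet", "apps/v1beta2"): "apps/v1",
    ("ReplicaSet", "apps/v1beta2"): "apps/v1",
    ("StatefulSet", "apps/v1beta1"): "apps/v1",
    ("StatefulSet", "apps/v1beta2"): "apps/v1",
}

def migration_target(kind: str, api_version: str) -> Optional[str]:
    """Return the apiVersion to migrate to, or None if no migration applies."""
    return MIGRATIONS.get((kind, api_version))
```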

@joshprzybyszewski-wf commented Oct 16, 2019

As a helm n00b who is using minikube, I was able to get around this issue by setting a kubernetes version like so:

$ minikube delete
$ minikube start --kubernetes-version=1.15.4

Hope it helps!

@pierluigilenoci commented Oct 21, 2019

@PierreF I used your solution ( #6374 (comment) ) with k8s v1.16.1 and helm v2.15.0, and tiller is not working.

Readiness probe failed: Get http://10.238.128.95:44135/readiness: dial tcp 10.238.128.95:44135: connect: connection refused
@dumindu commented Oct 22, 2019

@joshprzybyszewski-wf I used the following commands

minikube start --memory=16384 --cpus=4 --kubernetes-version=1.15.4
kubectl create -f istio-1.3.3/install/kubernetes/helm/helm-service-account.yaml
helm init --service-account tiller
helm install istio-1.3.3/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
helm install istio-1.3.3/install/kubernetes/helm/istio --name istio --namespace istio-system

And now get,

Error: validation failed: [unable to recognize "": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3", unable to recognize "": no matches for kind "DestinationRule" in version "networking.istio.io/v1alpha3", unable to recognize "": no matches for kind "attributemanifest" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "attributemanifest" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "handler" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "handler" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "instance" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "rule" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "rule" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "rule" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "rule" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "rule" in version "config.istio.io/v1alpha2", unable to recognize "": no matches for kind "rule" in version 
"config.istio.io/v1alpha2"]
@ChaturvediSulabh commented Nov 4, 2019

[quotes @mattymo's short-term workaround and kubectl patch command above]

You missed adding matchLabels under selector.
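In other words, the patch body needs the matchLabels wrapper that apps/v1 requires. A corrected version might look like this (a sketch of the JSON passed to kubectl patch -p, not a verified one-liner):

```python
import json

# apps/v1 requires selector.matchLabels; the earlier patch put the
# labels directly under "selector", which apps/v1 rejects.
patch = {
    "spec": {
        "selector": {
            "matchLabels": {"app": "helm", "name": "tiller"}
        }
    }
}
print(json.dumps(patch))
```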

@c0debreaker commented Nov 7, 2019

I was forwarded to @jbrette 's solution. This is what I got when I ran it

error: error parsing STDIN: error converting YAML to JSON: yaml: line 11: mapping values are not allowed in this context
@kurczynski commented Nov 7, 2019

This has been fixed in Helm 2.16.0.

@peterwilliam860 commented Nov 7, 2019

I was forwarded to @jbrette 's solution. This is what I got when I ran it

error: error parsing STDIN: error converting YAML to JSON: yaml: line 11: mapping values are not allowed in this context

Check the YAML files; in most cases the referenced line has {} or [] yet still has other things defined under it, which causes the error. In most cases the issue is within values.yaml; otherwise check the templates section of the chart.

@derba commented Nov 8, 2019

Just a side note to @PierreF's and @mihivagyok's solutions: those did not work for me when I use private Helm repos.

$ helm repo add companyrepo https://companyrepo
Error: Couldn't load repositories file (/home/username/.helm/repository/repositories.yaml).

I guess that happens because helm init is not actually run; it just generates the YAML file. I fixed that by running helm init -c as an extra step.

BjoernT added a commit to BjoernT/kubespray that referenced this issue Nov 11, 2019
Since upgrading k8s beyond 1.16.0 version, helm init does
no longer work with helm < 2.16.0 due to
helm/helm#6374

This PR closes issue kubernetes-sigs#5331
BjoernT added a commit to BjoernT/kubespray that referenced this issue Nov 11, 2019
Since upgrading k8s beyond 1.16.0 version, helm init does
no longer work with helm < 2.16.0 due to
helm/helm#6374

This PR closes issue kubernetes-sigs#5331
k8s-ci-robot added a commit to kubernetes-sigs/kubespray that referenced this issue Nov 11, 2019
Since upgrading k8s beyond 1.16.0 version, helm init does
no longer work with helm < 2.16.0 due to
helm/helm#6374

This PR closes issue #5331