
Helm says tiller is installed AND could not find tiller #4685

Closed
mabushey opened this issue Sep 21, 2018 · 32 comments

@mabushey commented Sep 21, 2018

helm init --service-account tiller
$HELM_HOME has been configured at /home/ubuntu/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!

Output of helm version:
$ helm version
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Error: could not find tiller

Output of kubectl version:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):
AWS / Kops

@bacongobbler (Member) commented Sep 21, 2018

What's the output of kubectl -n kube-system get pods?

helm init only checks that the deployment manifest was submitted to Kubernetes. If you want to check whether tiller is live and ready, use helm init --wait. :)
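To do that check yourself without blocking, one option is to read the deployment's AVAILABLE column. This is a sketch, not part of helm; the `tiller_available` helper is hypothetical:

```shell
# Hypothetical helper: reads `kubectl -n kube-system get deployments` output
# on stdin and succeeds only if tiller-deploy reports at least one
# available replica (column 5 of the default kubectl table).
tiller_available() {
  awk '$1 == "tiller-deploy" { found = 1; ok = ($5 + 0 >= 1) }
       END { exit (found && ok) ? 0 : 1 }'
}

# Usage against a live cluster:
#   kubectl -n kube-system get deployments | tiller_available && echo "tiller is up"
```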

@psychemedia commented Sep 25, 2018

I'm getting the Error: could not find tiller message too, using Kubernetes under Docker for Desktop (Mac).

helm version
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Error: could not find tiller

Running kubectl -n kube-system get pods on context docker-for-desktop gives me:

etcd-docker-for-desktop                      1/1       Running   1          8m
kube-apiserver-docker-for-desktop            1/1       Running   1          8m
kube-controller-manager-docker-for-desktop   1/1       Running   1          8m
kube-dns-86f4d74b45-t8pq8                    3/3       Running   0          11m
kube-proxy-d6c4q                             1/1       Running   0          9m
kube-scheduler-docker-for-desktop            1/1       Running   1          8m
@mabushey (Author) commented Sep 26, 2018

$ helm init --wait
$HELM_HOME has been configured at /home/ubuntu/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)

It just hangs... I hit ctrl-c after 2 minutes.

$ kubectl -n kube-system get pods
NAME                                                                 READY     STATUS    RESTARTS   AGE
coredns-59558d567-6qgbv                                              1/1       Running   0          7d
coredns-59558d567-s6w7t                                              1/1       Running   0          7d
dns-controller-b76dfc754-f9vlj                                       1/1       Running   0          7d
etcd-server-events-ip-10-132-1-49.us-west-2.compute.internal         1/1       Running   3          7d
etcd-server-events-ip-10-132-2-171.us-west-2.compute.internal        1/1       Running   0          7d
etcd-server-events-ip-10-132-3-80.us-west-2.compute.internal         1/1       Running   0          7d
etcd-server-ip-10-132-1-49.us-west-2.compute.internal                1/1       Running   3          7d
etcd-server-ip-10-132-2-171.us-west-2.compute.internal               1/1       Running   0          7d
etcd-server-ip-10-132-3-80.us-west-2.compute.internal                1/1       Running   0          7d
kube-apiserver-ip-10-132-1-49.us-west-2.compute.internal             1/1       Running   1          7d
kube-apiserver-ip-10-132-2-171.us-west-2.compute.internal            1/1       Running   1          7d
kube-apiserver-ip-10-132-3-80.us-west-2.compute.internal             1/1       Running   1          7d
kube-controller-manager-ip-10-132-1-49.us-west-2.compute.internal    1/1       Running   0          7d
kube-controller-manager-ip-10-132-2-171.us-west-2.compute.internal   1/1       Running   0          7d
kube-controller-manager-ip-10-132-3-80.us-west-2.compute.internal    1/1       Running   0          7d
kube-proxy-ip-10-132-1-103.us-west-2.compute.internal                1/1       Running   0          7d
kube-proxy-ip-10-132-1-49.us-west-2.compute.internal                 1/1       Running   0          7d
kube-proxy-ip-10-132-2-171.us-west-2.compute.internal                1/1       Running   0          7d
kube-proxy-ip-10-132-2-175.us-west-2.compute.internal                1/1       Running   0          7d
kube-proxy-ip-10-132-3-115.us-west-2.compute.internal                1/1       Running   0          7d
kube-proxy-ip-10-132-3-80.us-west-2.compute.internal                 1/1       Running   0          7d
kube-scheduler-ip-10-132-1-49.us-west-2.compute.internal             1/1       Running   0          7d
kube-scheduler-ip-10-132-2-171.us-west-2.compute.internal            1/1       Running   0          7d
kube-scheduler-ip-10-132-3-80.us-west-2.compute.internal             1/1       Running   0          7d
@bacongobbler (Member) commented Sep 26, 2018

Interesting. What about kubectl -n kube-system get deployments? Maybe there's something wrong where new pods aren't getting scheduled. Check the status of that deployment and see if something's up.

@psychemedia commented Sep 26, 2018

If I run helm init --wait on my simple Docker for Desktop k8s setup, it just hangs with no output.

$ helm init --wait
$HELM_HOME has been configured at ~/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)

Running kubectl -n kube-system get deployments gives:

kubectl -n kube-system get deployments
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-dns        1         1         1            1           10h
tiller-deploy   1         0         0            0           10h
@mabushey (Author) commented Sep 26, 2018

$ kubectl -n kube-system get deployments
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
coredns          2         2         2            2           7d
dns-controller   1         1         1            1           7d
tiller-deploy    1         0         0            0           7d
@bacongobbler (Member) commented Sep 26, 2018

Sorry about this. Can you both try kubectl -n kube-system describe deployment tiller-deploy? You'll likely get more information on why the pod is not being scheduled. If not, you can try debugging the replica set that the Kubernetes deployment deployed (hehe 😄).

https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/#debugging-replication-controllers

@psychemedia commented Sep 26, 2018

kubectl -n kube-system describe deployment tiller-deploy returns:

kubectl -n kube-system describe deployment tiller-deploy
Name:                   tiller-deploy
Namespace:              kube-system
CreationTimestamp:      Tue, 25 Sep 2018 23:36:14 +0100
Labels:                 app=helm
                        name=tiller
Annotations:            deployment.kubernetes.io/revision=2
Selector:               app=helm,name=tiller
Replicas:               1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:           app=helm
                    name=tiller
  Service Account:  tiller
  Containers:
   tiller:
    Image:       gcr.io/kubernetes-helm/tiller:v2.10.0
    Ports:       44134/TCP, 44135/TCP
    Host Ports:  0/TCP, 0/TCP
    Command:
      /tiller
      --listen=localhost:44134
    Liveness:   http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:                <none>
  Volumes:                 <none>
Conditions:
  Type             Status  Reason
  ----             ------  ------
  Available        True    MinimumReplicasAvailable
  ReplicaFailure   True    FailedCreate
  Progressing      False   ProgressDeadlineExceeded
OldReplicaSets:    <none>
NewReplicaSet:     tiller-deploy-55bfddb486 (0/1 replicas created)
Events:            <none>
@bacongobbler (Member) commented Sep 26, 2018

And the replica set? Basically, go down the list in that doc and see if you find anything useful.

@pc-rshetty commented Oct 9, 2018

Thanks. I got the same error. Describing the replicaset showed that I had not created the service account. I deleted the tiller deployment, created the service account, then reran it and it worked.

@bacongobbler (Member) commented Oct 9, 2018

Closing as a cluster issue, not a helm issue.

@mabushey (Author) commented Oct 25, 2018

The Kubernetes cluster works great, I have numerous services running under it. What doesn't work is helm/tiller.

@mabushey (Author) commented Oct 25, 2018

I used kubectl -n kube-system delete deployment tiller-deploy and kubectl -n kube-system delete service/tiller-deploy. Then helm --init worked. I was missing removing the service previously.
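Those steps could be wrapped in one function. This is a hypothetical sketch of the procedure, using the corrected spelling helm init and the --service-account flag from the original report:

```shell
# Hypothetical wrapper around the fix described above: remove the stale
# tiller deployment AND its service, then reinstall tiller via helm init.
reinstall_tiller() {
  kubectl -n kube-system delete deployment tiller-deploy &&
  kubectl -n kube-system delete service tiller-deploy &&
  helm init --service-account tiller
}

# Usage against a live cluster:
#   reinstall_tiller
```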

@porrascarlos802018 commented Dec 19, 2018

@mabushey's solution works!

@niekoost commented Feb 23, 2019

@mabushey's solution works, but with helm init instead of helm --init.

@a-barbieri commented Mar 3, 2019

I came across @psychemedia issue as well.

After running kubectl -n kube-system describe deployment tiller-deploy I had the same output. And if you read @psychemedia's output carefully, it says

...

Conditions:
  Type             Status  Reason
  ----             ------  ------
  Available        True    MinimumReplicasAvailable
  ReplicaFailure   True    FailedCreate
  Progressing      False   ProgressDeadlineExceeded
OldReplicaSets:    <none>
NewReplicaSet:     tiller-deploy-55bfddb486 (0/1 replicas created)
Events:            <none>

The important bit is ReplicaFailure True FailedCreate and the following NewReplicaSet: tiller-deploy-55bfddb486 (0/1 replicas created).

To find the problem, he should have run

kubectl -n kube-system describe replicaset tiller-deploy-55bfddb486

(or just kubectl describe replicaset tiller-deploy-55bfddb486, depending on whether the namespace is set or not... you can find it by listing all replicasets with kubectl get replicaset --all-namespaces).

The reason why the replicaset wasn't created should have been listed there under Events:.

I actually had the same issue running on a different namespace than kube-system.
See #3304 (comment)
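That triage can be sketched as a small pipeline. The `extract_rs` helper is hypothetical and assumes the kubectl describe output format shown above:

```shell
# Hypothetical helper: extract the ReplicaSet name from the
# "NewReplicaSet: <name> (...)" line of `kubectl describe deployment` output.
extract_rs() {
  sed -n 's/^NewReplicaSet:[[:space:]]*\([^ ]*\).*$/\1/p'
}

# Usage against a live cluster: describe the deployment, pull out the failing
# ReplicaSet, then describe it to see the FailedCreate reason under Events:.
#   rs=$(kubectl -n kube-system describe deployment tiller-deploy | extract_rs)
#   kubectl -n kube-system describe replicaset "$rs"
```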

@datascienceteam01 commented Mar 13, 2019

NOTICE: This ticket shouldn't be closed as there is no published solution to this issue, just a selfish overstatement that the few members of this thread deduced from the ReplicaFailure status and acknowledge tacitly to each other but never provided explicitly to the log. No reproduction/solution steps were published.

@bacongobbler (Member) commented Mar 13, 2019

This issue was originally closed because there were no steps provided to reproduce the original issue. @mabushey's solution in #4685 (comment) appears to fix the issues he was having with his cluster, but without a series of steps to reproduce the issue, we cannot identify what causes this situation to occur in the first place, so we closed it as a solved support ticket with no actionable resolution.

It's been 6 months since this issue was opened, so I doubt we'll be able to figure out the exact steps to reproduce @mabushey's and @psychemedia's environments. However, if you can reliably reproduce the issue, please feel free to respond here with your steps so we can better understand how this bug occurs and provide a better solution (or better yet, identify a fix to address the issue). We can then re-open this issue to determine if a patch can be provided.

If you're continuing to have issues and @mabushey's solution in #4685 (comment) does not work for you, please open a new support ticket referencing this issue.

@datascienceteam01 commented Mar 13, 2019

@bacongobbler
The problem occurs when Tiller is created without a proper serviceaccount. This happens for one of two reasons: (a) the helm init script does not create it, as it certainly should; (b) the namespace in question mismatches an existing service account definition.

To work around it, you must first run "helm delete" and then create an rbac-config.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

If you need to use a different namespace, make sure it matches your later Tiller installation and that the cluster-admin role exists (it usually does!).

Then
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller --history-max 200

And you're good to go.

@tomcanham commented Apr 28, 2019

So I tried to create the service account as described by @datascienceteam01 -- succeeded. Did the helm init --service-account (etc.) -- seemed to succeed.

But the deployment seems to just... spin. No events, notably:

$ kubectl -n kube-system describe deployment tiller-deploy
Name:                   tiller-deploy
Namespace:              kube-system
CreationTimestamp:      Sun, 28 Apr 2019 10:26:24 -0700
Labels:                 app=helm
                        name=tiller
Annotations:            <none>
Selector:               app=helm,name=tiller
Replicas:               1 desired | 0 updated | 0 total | 0 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:           app=helm
                    name=tiller
  Service Account:  tiller
  Containers:
   tiller:
    Image:       gcr.io/kubernetes-helm/tiller:v2.13.1
    Ports:       44134/TCP, 44135/TCP
    Host Ports:  0/TCP, 0/TCP
    Liveness:    http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:   http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  200
    Mounts:                <none>
  Volumes:                 <none>
OldReplicaSets:            <none>
NewReplicaSet:             <none>
Events:                    <none>

Sorry to resurrect a dead thread, and the symptoms look a little different. My config is kind of a frankenconfig: Docker Desktop running on Windows 10, with helm installed under the Ubuntu shell (Windows Subsystem for Linux). Kubernetes seems happy; I can do all the usual kubectl stuff. I'm just having some problems getting helm init to work.

Any thoughts on how to troubleshoot?

I'm going to try helm init under Windows (if I can figure out how to install helm on Windows!) if I can't figure it out under the Ubuntu bash shell, but I'd really like to make sure it's working under the Linux shell, because that's my "real" dev environment.

Also, sorry for deving on Windows. Right now, at least, I have no other options :)

@Tenseiga commented May 17, 2019

This issue has been a pain for a long time. Thinking of moving away from Helm. Sigh! My pipelines fail because of this every time.

@karuppiah7890 (Contributor) commented May 17, 2019

@Tenseiga Could you please elaborate on the issue so that we can help you? Maybe give us info like helm version output, kubectl version output, and also check anything relating to the tiller pod logs, tiller deployment, and tiller replica set using kubectl describe. We will try our best to fix it!

@tomcanham you could help here too by reproducing the issue, if you are still facing it!

@govindKAG commented Jul 4, 2019

Any update on this?

@infolock commented Jul 27, 2019

helm init (without any additional options) was the ticket for me; it installed and set up tiller. All is well after that.

@Leekao commented Aug 8, 2019

This issue happens to me every time I switch from cluster to cluster using "config set-context". Kubernetes changes context just fine, but helm does not; instead it emits "Error: could not find tiller", and when I try "helm init" I get "Warning: Tiller is already installed in the cluster."
If I change the context back, helm works again. Not sure if it's relevant, but the cluster where it works is PKS and the one where it doesn't is EKS.

@nterry commented Aug 8, 2019

After a LOT of beating my head against a wall, I figured out why I was seeing this issue... According to the AWS documentation for EKS HERE, you set the TILLER_NAMESPACE environment variable to tiller. This was causing the helm binary to deploy tiller in the tiller namespace (go figure).

After unsetting that variable and re-deploying, all was well...

You can also override those settings with command line args documented HERE

HTH
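The mismatch can be illustrated with a tiny sketch (the helper name is mine, not helm's): the helm 2 client looks for tiller in $TILLER_NAMESPACE, defaulting to kube-system, so a stray export silently redirects it:

```shell
# Hypothetical helper mirroring the client-side lookup: the namespace helm
# searches for tiller is TILLER_NAMESPACE if set, otherwise kube-system.
effective_tiller_ns() {
  echo "${TILLER_NAMESPACE:-kube-system}"
}

# If tiller was actually deployed into kube-system but this prints "tiller",
# helm will report "could not find tiller":
#   effective_tiller_ns
```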

@chrissound commented Sep 12, 2019

Why is this closed if so many people are having issues with this?

I've opened a question on stackoverflow: https://stackoverflow.com/questions/57906429/helm-init-says-tiller-is-already-on-cluster-but-its-not

@thavlik commented Sep 13, 2019

Have you all tried:

kubectl apply -f tiller.yaml
helm init --service-account tiller --upgrade

tiller.yaml:

kind: ServiceAccount
apiVersion: v1
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

This is part of my up.sh script for starting my dev cluster from scratch. The --upgrade flag was necessary to allow it to be executed multiple times. I believe the original error about not being able to find tiller is related to it being installed but the tiller-deploy-* pod not being found in kube-system.

@yiarne commented Sep 23, 2019

It worked for me after following https://helm.sh/docs/using_helm/#tiller-and-role-based-access-control
Just create the YAML and run the command.

@chrissound commented Sep 23, 2019

The point is, the error is misleading. THAT is the issue in my eyes.

@cyrilthank commented Sep 27, 2019

I get the below error:

kubeflow@masternode:~$ helm init --service-account tiller --upgrade
$HELM_HOME has been configured at /home/kubeflow/.helm.
Error: error installing: the server could not find the requested resource
kubeflow@masternode:~$

I appreciate any help.

@phanimullapudi commented Oct 4, 2019

kubectl -n kube-system delete deployment tiller-deploy

It works, but with a small change: use $ helm init afterwards.
