
Connection refused error when issuing helm install command #3460

Closed
OpusX opened this issue Feb 5, 2018 · 26 comments · Fixed by #3784

Comments


OpusX commented Feb 5, 2018

When running "helm list" or a "helm install .... " I am getting the following error:
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp [::1]:8080: getsockopt: connection refused

This was a manual install in a lab on CentOS.

Tiller pod is up and running:
kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
tiller-deploy-978853713-7h871 1/1 Running 0 108d

helm version
Client: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}


OpusX commented Feb 6, 2018

Here is my k8s config:

apiVersion: v1
clusters:
- cluster:
    server: http://kubrelab01:8080
  name: default-cluster
contexts:
- context:
    cluster: default-cluster
    user: default-admin
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users: []

kubectl cluster-info
Kubernetes master is running at http://kubrelab01:8080

kubectl config current-context
default-context

@bacongobbler

looks like tiller is attempting to connect to the kubernetes API server using ipv6. Because this is a manual install, I'm going to immediately assume that this is due to a misconfigured cluster with ipv6 enabled and not exactly tiller's fault. I know there have been issues in the past with Kubernetes and ipv6. I'd either try to disable ipv6 from within your cluster or otherwise look into that part of your configuration.


mrene commented Feb 23, 2018

I have the same issue on GKE (1.9.2) with RBAC on, installing tiller in a specific namespace. The pod didn't have anything mounted inside /var/run/secrets - I had to edit the deployment and add automountServiceAccountToken: true, which fixed the problem. Might be a new default in k8s 1.9?


mrene commented Feb 23, 2018

Actually, it's the service account that I created that had automountServiceAccountToken: false set by default.
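For anyone else hitting this, a minimal ServiceAccount manifest with automounting explicitly enabled would look something like the sketch below. The `tiller` name and `kube-system` namespace are just the usual Tiller convention, not a requirement:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
# Without this (or with it set to false), pods using this service account
# get no token under /var/run/secrets/kubernetes.io/serviceaccount, and
# in-cluster clients fall back to localhost:8080.
automountServiceAccountToken: true
```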

jkoleszar added a commit to jkoleszar/helm that referenced this issue Mar 29, 2018
Adds automountServiceAccountToken when a serviceAccount is specified.

Prior to this, tiller falls back to contacting the KUBERNETES_SERVICE on
localhost:8080 rather than respecting the cluster IP in the
KUBERNETES_SERVICE_{HOST,PORT} environment variables.

Fixes helm#3460, fixes helm#3467.
bacongobbler pushed a commit that referenced this issue Apr 4, 2018
Adds automountServiceAccountToken when a serviceAccount is specified.

Prior to this, tiller falls back to contacting the KUBERNETES_SERVICE on
localhost:8080 rather than respecting the cluster IP in the
KUBERNETES_SERVICE_{HOST,PORT} environment variables.

Fixes #3460, fixes #3467.

(cherry picked from commit 1e03f1b)
@bacongobbler

Re-opening this one: we had to revert the PR that addressed this issue because it broke other parts of the codebase (e.g. helm init with no --service-account set).

@bacongobbler bacongobbler reopened this Apr 30, 2018

geekdave commented May 1, 2018

Any workarounds for this? Getting the same issue on an AWS cluster I set up with kubeadm. Saw the comment about adding automountServiceAccountToken: true but not sure where this would go?
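If I'm reading the pod spec reference right, it would sit at the pod template level of the tiller deployment, something like the sketch below, but I haven't verified this (the image tag and account name here are just guesses for illustration):

```yaml
# Patch via: kubectl -n kube-system edit deploy tiller-deploy
spec:
  template:
    spec:
      # Pod-spec level field; overrides the service account's own setting.
      automountServiceAccountToken: true
      serviceAccountName: tiller   # whatever account tiller runs as
      containers:
      - name: tiller
        image: gcr.io/kubernetes-helm/tiller:v2.9.1
```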

@hendrikhalkow

Getting the same on OpenShift, deployed via openshift-ansible. I want to use IPv6 and I am pretty sure that I set it up correctly. Any idea where that localhost:8080 URL comes from?

@bacongobbler

workaround:

helm init --service-account default

@hendrikhalkow

Doesn't change a thing:

$ rm -rf ~/.helm/ ~/.kube/
$ oc login https://my.domain
Authentication required for https://my.domain.io:443 (openshift)
Username: myusername
Password: 
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

* default
    glusterfs
    hello
    kube-public
    kube-service-catalog
    kube-system
    logging
    management-infra
    openshift
    openshift-ansible-service-broker
    openshift-infra
    openshift-node
    openshift-template-service-broker
    openshift-web-console

Using project "default".
Welcome! See 'oc help' to get started.
$ kubectl get pods
NAME                       READY     STATUS        RESTARTS   AGE
docker-registry-1-cp2vh    1/1       Running       1          15h
docker-registry-1-g4f46    1/1       Running       1          9h
my-hello-1-q4gnd           1/1       Running       0          21m
registry-console-1-vdqgq   1/1       Running       1          9h
router-1-6djkg             0/1       Pending       0          8h
router-1-njwbk             1/1       Running       1          15h
router-1-r58c8             0/1       Terminating   0          9h
$ helm init
$ helm init --service-account default
Creating /Users/myusername/.helm 
Creating /Users/myusername/.helm/repository 
Creating /Users/myusername/.helm/repository/cache 
Creating /Users/myusername/.helm/repository/local 
Creating /Users/myusername/.helm/plugins 
Creating /Users/myusername/.helm/starters 
Creating /Users/myusername/.helm/cache/archive 
Creating /Users/myusername/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /Users/myusername/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!
$ helm ls
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 127.0.0.1:8080: connect: connection refused


bacongobbler commented May 1, 2018

it won't work if you ran helm init previously. Sorry! I meant that you need to replace helm init with helm init --service-account default. Run kubectl -n kube-system delete deploy tiller-deploy and re-run helm init --service-account default again.


cookkkie commented May 2, 2018

Hi, after doing kubectl -n kube-system delete deploy tiller-deploy, I'm getting Warning: Tiller is already installed in the cluster. from the helm init --service-account default. Deleting the deployment isn't enough to uninstall tiller?

EDIT: I tried helm reset, it gives me Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list configmaps in the namespace "kube-system": Unknown user "system:serviceaccount:kube-system:default". So probably just a perms issue on my side.


bacongobbler commented May 2, 2018

yes. helm init provisions a deployment and a service, so in order to remove that warning you also need to remove the tiller-deploy service.


innovia commented May 2, 2018

@cookkkie

The service account needs a cluster-admin ClusterRole; the default service account does not have one. The default service account only works on clusters without RBAC.

kubectl delete svc tiller-deploy -n kube-system
kubectl -n kube-system delete deploy tiller-deploy
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
helm ls # does not return an error
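For anyone who prefers manifests, the clusterrolebinding command above should be equivalent to roughly this, if I have the RBAC API right:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin      # grants full cluster access to the subject below
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```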

Proof that it's working:

helm upgrade -i --namespace kube-system autoscale -f cluster-autoscaler/my-values.yaml cluster-autoscaler

helm ls
NAME     	REVISION	UPDATED                 	STATUS  	CHART                   	NAMESPACE
autoscale	1       	Wed May  2 17:07:52 2018	DEPLOYED	cluster-autoscaler-0.6.1	kube-system

Check out my blog post on how to set up Tiller per namespace.

@bacongobbler

The root issue for this has been fixed in helm v2.9.1. Thanks everyone!

@mrene if there's a doc issue where we create a service account with automountServiceAccountToken set to false by default, could you please open a new ticket/PR for that? Thank you so much. :)


liijuun commented Aug 16, 2018

@innovia
It's not working for me.

root@neo-1:~/kubernetes/prometheus-operator/helm# helm ls
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout

root@neo-1:~/kubernetes/prometheus-operator/helm# kubectl delete svc tiller-deploy -n kube-system
service "tiller-deploy" deleted
root@neo-1:~/kubernetes/prometheus-operator/helm# kubectl -n kube-system delete deploy tiller-deploy
deployment.extensions "tiller-deploy" deleted
root@neo-1:~/kubernetes/prometheus-operator/helm# kubectl -n kube-system delete serviceaccount tiller
serviceaccount "tiller" deleted
root@neo-1:~/kubernetes/prometheus-operator/helm# kubectl -n kube-system delete clusterrolebinding tiller-cluster-rule
clusterrolebinding.rbac.authorization.k8s.io "tiller-cluster-rule" deleted



root@neo-1:~/kubernetes/prometheus-operator/helm# kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created
root@neo-1:~/kubernetes/prometheus-operator/helm# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
root@neo-1:~/kubernetes/prometheus-operator/helm# helm init --service-account tiller
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
root@neo-1:~/kubernetes/prometheus-operator/helm# helm ls
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout



root@neo-1:~/kubernetes/prometheus-operator/helm# helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}

splisson pushed a commit to splisson/helm that referenced this issue Dec 6, 2018
Adds automountServiceAccountToken when a serviceAccount is specified.

Prior to this, tiller falls back to contacting the KUBERNETES_SERVICE on
localhost:8080 rather than respecting the cluster IP in the
KUBERNETES_SERVICE_{HOST,PORT} environment variables.

Fixes helm#3460, fixes helm#3467.
@summerzhangft

kubectl get pods -n kube-system
kubectl delete pod tiller-deploy-bf49955d6-tcchs
helm init --service-account tiller

jianghang8421 pushed a commit to jianghang8421/helm that referenced this issue Feb 17, 2019
Adds automountServiceAccountToken when a serviceAccount is specified.

Prior to this, tiller falls back to contacting the KUBERNETES_SERVICE on
localhost:8080 rather than respecting the cluster IP in the
KUBERNETES_SERVICE_{HOST,PORT} environment variables.

Fixes helm#3460, fixes helm#3467.

rgylan commented Mar 24, 2019

Had the same issue: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 10.96.0.1:443: i/o timeout

Issuing "helm list" produces the error above.

In my case flannel is the culprit. I had kubeadm init below:
kubeadm init --apiserver-advertise-address=192.168.137.189 --pod-network-cidr=192.168.1.0/18
But somehow flannel forced the CIDR to be 10.244.0.0/16
You can check by: kubectl edit cm -n kube-system kube-flannel-cfg
So I edited using 192.168.1.0/18
Then: kubectl delete pod -n kube-system -l app=flannel to make sure you have a newly spawned flannel pod which will get the 192.168.1.0/18
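For reference, the CIDR lives in the net-conf.json key of that ConfigMap; after editing, mine looked roughly like the sketch below (the vxlan backend is flannel's usual default, and the CIDR here is from my cluster — adjust to your own pod network):

```yaml
# Shown by: kubectl -n kube-system edit cm kube-flannel-cfg
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "192.168.1.0/18",
      "Backend": { "Type": "vxlan" }
    }
```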

Then, as per the post above, start a new tiller pod with the correct cluster role:
kubectl delete svc tiller-deploy -n kube-system
kubectl -n kube-system delete deploy tiller-deploy
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
helm ls # does not return an error

Then: helm list, no more timeout error.

Thanks to this post: https://gravitational.com/blog/kubernetes-flannel-dashboard-bomb/#fn:https-kubernetes

@tunicashashi

After doing everything as above, I still see this issue with my Kubernetes 1.14 and Helm 2.13:
I have --pod-network-cidr: 172.16.0.0/16
and --service-network-cidr: 172.18.0.0/16
and my master VM apiserver is on : 192.168.10.14

I had created a service account as above mentioned: tiller
My kubectl command gives the log below:
[centos@master-centos oisp-k8s]$ kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-85szb 1/1 Running 0 3d 172.16.0.98 master-centos.novalocal
coredns-fb8b8dccf-w5swd 1/1 Running 0 3d 172.16.0.99 master-centos.novalocal
etcd-master-centos.novalocal 1/1 Running 0 3d 192.168.10.14 master-centos.novalocal
kube-apiserver-master-centos.novalocal 1/1 Running 0 3d 192.168.10.14 master-centos.novalocal
kube-controller-manager-master-centos.novalocal 1/1 Running 0 3d 192.168.10.14 master-centos.novalocal
kube-flannel-ds-amd64-6qh4q 1/1 Running 0 3d 192.168.10.3 node03-centos.novalocal
kube-flannel-ds-amd64-g5m9c 1/1 Running 0 3d 192.168.10.14 master-centos.novalocal
kube-proxy-2g7j8 1/1 Running 0 3d 192.168.10.3 node03-centos.novalocal
kube-proxy-llqdk 1/1 Running 0 3d 192.168.10.14 master-centos.novalocal
kube-scheduler-master-centos.novalocal 1/1 Running 0 3d 192.168.10.14 master-centos.novalocal
tiller-deploy-8458f6c667-xtnv9 1/1 Running 0 3h15m 172.16.1.4 node03-centos.novalocal


$ kubectl logs tiller-deploy-8458f6c667-xtnv9 -n kube-system

[main] 2019/04/15 12:05:05 Starting Tiller v2.13.1 (tls=false)
[main] 2019/04/15 12:05:05 GRPC listening on :44134
[main] 2019/04/15 12:05:05 Probes listening on :44135
[main] 2019/04/15 12:05:05 Storage driver is ConfigMap
[main] 2019/04/15 12:05:05 Max history per release is 0
[storage] 2019/04/15 12:57:47 listing all releases with filter
[storage/driver] 2019/04/15 12:58:17 list: failed to list: Get https://172.18.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 172.18.0.1:443: i/o timeout
[storage] 2019/04/15 14:23:45 listing all releases with filter

@damien-roche

Considering Helm (3) is moving away from Tiller, is it about time we forget it exists? I've had nothing but problems, and it seems like a huge security issue to jump through RBAC hoops and give it unnecessary control over the cluster, just to manage releases.

I've started using helm template .. with kubectl apply and have no intention of using Tiller ever again. I'd highly recommend that those above who have issues do the same.


cforce commented Nov 1, 2019

#3480 (comment)


rrmuchedzi commented Mar 30, 2020

First, make sure you have Docker and Kubernetes set up, then run (replace <YOUR_USER> with your host username):

mkdir -p ~/.kube
ln -sf /mnt/c/users/<YOUR_USER>/.kube/config ~/.kube/config

Then run,
helm init

@mrvrbabu

@rgylan Your solution worked perfectly for me and it saved my day. :)

@AntonOfTheWoods

If you are running on something like microk8s or minikube and have used a VPN or something that messes with your iptables then this might be due to network funkiness. Try restarting your host network and kubernetes (e.g, microk8s.stop && microk8s.start).

@pohvak

pohvak commented Feb 9, 2021

workaround:

helm init --service-account default

init is not a helm command

@bacongobbler

@pohvak it was for Helm 2 back when I posted that comment in 2018.

@pohvak

pohvak commented Feb 9, 2021

@pohvak it was for Helm 2 back when I posted that comment in 2018.

@bacongobbler oh, I see now, sorry
