Connection refused error when issuing helm install command #3460
Here is my k8s config:

```
kubectl cluster-info
kubectl config current-context
```
|
Looks like tiller is attempting to connect to the Kubernetes API server over IPv6. Because this is a manual install, I'm going to assume this is due to a misconfigured cluster with IPv6 enabled rather than tiller's fault. I know there have been issues in the past with Kubernetes and IPv6. I'd either try to disable IPv6 from within your cluster or otherwise look into that part of your configuration. |
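Disabling IPv6 on a node is commonly done via sysctl. A minimal sketch (these are standard Linux sysctl keys, but verify the right approach for your distro; requires root):

```shell
# Sketch: disable IPv6 on a node at runtime (Linux, run as root).
# To persist across reboots, add the same keys to /etc/sysctl.conf
# or a file under /etc/sysctl.d/.
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
```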
I have the same issue on GKE (1.9.2) with RBAC on, installing tiller in a specific namespace. The pod didn't have anything mounted inside |
Actually, it's the service account that I created that had |
Adds automountServiceAccountToken when a serviceAccount is specified. Prior to this, tiller falls back to contacting the KUBERNETES_SERVICE on localhost:8080 rather than respecting the cluster IP in the KUBERNETES_SERVICE_{HOST,PORT} environment variables. Fixes #3460, fixes #3467. (cherry picked from commit 1e03f1b)
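The fallback described here can be illustrated with a small shell sketch. The `KUBERNETES_SERVICE_{HOST,PORT}` variables are the real environment variables Kubernetes injects into pods; the fallback logic below is a simplified illustration of the behaviour, not tiller's actual code:

```shell
#!/bin/sh
# Illustration: when the KUBERNETES_SERVICE_* env vars are absent (e.g. because
# the service account token and env were never injected), a client that falls
# back to defaults ends up talking to localhost:8080 instead of the cluster IP.
host="${KUBERNETES_SERVICE_HOST:-localhost}"
port="${KUBERNETES_SERVICE_PORT:-8080}"
echo "API server: ${host}:${port}"
# With the variables unset, this prints: API server: localhost:8080
```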
Re-opening this one: we had to revert the PR that addressed this issue because it broke other parts of the codebase (e.g. |
Any workarounds for this? Getting the same issue on an AWS cluster I set up with |
Getting the same on OpenShift, deployed via openshift-ansible. I want to use IPv6 and I am pretty sure that I set it up correctly. Any idea where that localhost:8080 URL comes from? |
workaround:
|
Doesn't change a thing:
|
it won't work if you ran |
Hi, after doing EDIT: I tried |
yes. |
The service account needs to have the proper permissions (on clusters without RBAC, the default service account is used):

```
kubectl delete svc tiller-deploy -n kube-system
kubectl -n kube-system delete deploy tiller-deploy
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
helm ls   # does not return an error
```

Proof that it's working:

```
helm upgrade -i --namespace kube-system autoscale -f cluster-autoscaler/my-values.yaml cluster-autoscaler
helm ls
NAME       REVISION  UPDATED                   STATUS    CHART                     NAMESPACE
autoscale  1         Wed May  2 17:07:52 2018  DEPLOYED  cluster-autoscaler-0.6.1  kube-system
```
|
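The two `kubectl create` steps above can also be expressed declaratively. A sketch of the equivalent manifests, using the same names as the commands above (applying them requires a running cluster):

```shell
# Sketch: declarative equivalent of the serviceaccount and
# clusterrolebinding creation steps above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
EOF
```

Binding `cluster-admin` is the broad-permissions shortcut used in this thread; a production setup would scope tiller down to a narrower role.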
The root issue for this has been fixed in helm v2.9.1. Thanks everyone! @mrene if there's a doc issue where we create a service account with |
@innovia
|
kubectl get pods -n kube-system |
Had the same issue:

```
Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 10.96.0.1:443: i/o timeout
```

Issuing `helm list` produced the above. In my case flannel was the culprit. I ran `kubeadm init` as below: Then, as per the above post, started a new tiller pod with the correct cluster role. Then: `helm list`, no more timeout error. Thanks to this post: https://gravitational.com/blog/kubernetes-flannel-dashboard-bomb/#fn:https-kubernetes |
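The flannel fix above usually amounts to initializing the cluster with the pod CIDR that flannel's default manifest expects. A sketch, assuming flannel's default `10.244.0.0/16` CIDR (the manifest URL is an assumption; check the flannel repository for the current one):

```shell
# Sketch: init the control plane with the pod network CIDR flannel expects.
# 10.244.0.0/16 is flannel's default; adjust if your manifest differs.
kubeadm init --pod-network-cidr=10.244.0.0/16

# Then apply the flannel manifest (URL is an assumption; verify against
# the flannel repository before using).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```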
After doing everything as above, I still see this issue with Kubernetes 1.14 and helm 2.13. I had created the service account `tiller` as mentioned above:

```
$ kubectl logs tiller-deploy-8458f6c667-xtnv9 -n kube-system
[main] 2019/04/15 12:05:05 Starting Tiller v2.13.1 (tls=false)
```
|
Considering Helm 3 is moving away from Tiller, is it about time we forget it exists? I've had nothing but problems, and it seems like a huge security issue to jump through RBAC hoops and give it unnecessary control over the cluster... to manage releases? I've started using |
First, make sure you have Docker and Kubernetes set up, then run (replace `<YOUR_USER>` with your host username):

```
mkdir -p ~/.kube
```

Then run, |
@rgylan Your solution worked perfectly for me and it saved my day. :) |
If you are running on something like |
init is not a command for helm |
@pohvak it was for Helm 2 back when I posted that comment in 2018. |
@bacongobbler oh, I see now, sorry |
When running `helm list` or a `helm install ...` I am getting the following error:

```
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp [::1]:8080: getsockopt: connection refused
```

This was a manual install in a lab on CentOS.

Tiller pod is up and running:

```
kubectl -n kube-system get pods
NAME                           READY  STATUS   RESTARTS  AGE
tiller-deploy-978853713-7h871  1/1    Running  0         108d

helm version
Client: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.2", GitCommit:"be3ae4ea91b2960be98c07e8f73754e67e87963c", GitTreeState:"clean"}
```