Helm error with forwarding ports #2463
This is almost definitely a Kubernetes configuration issue. You might try asking in the #helm-users channel on Slack, if that's possible for you. |
I'm going to close this ticket due to inactivity, but please re-open if this still needs to be addressed. Thanks! |
```
me@mypad[k8s/kubeadm]% kubectl version
me@mypad[k8s/kubeadm]% helm list
```
|
What is returned when you type: |
I'm also hitting this problem on a vagrant-installed, (based on centos/7 boxes) multi-node cluster.
The cluster seems to operate fine (I've been able to install/access services OK). I tried playing with the yaml definition from
Any ideas? |
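[Editor's note: before debugging port-forward itself, one quick sanity check is whether the Tiller service is reachable from inside the cluster at all. A hedged sketch, assuming a busybox image with `nc` is pullable in your environment:]

```shell
# Launch a throwaway pod and try a TCP connect to the Tiller
# service on its gRPC port (44134); the pod is removed on exit
kubectl run -it --rm nettest --image=busybox --restart=Never -- \
  nc -zv tiller-deploy.kube-system 44134
```

If this succeeds while `kubectl port-forward` fails, the problem is in the API-server-to-kubelet path rather than in Tiller itself.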
You can use `kubectl port-forward` to forward port 44134 locally, then set `HELM_HOST=:44134` to interact with Tiller that way. Are there any errors you see when running `helm list` without that configuration, though? |
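[Editor's note: spelled out, the workaround described above would look something like the following sketch; `app=helm` is the label on Tiller's deployment and 44134 is Tiller's gRPC port, both visible later in this thread.]

```shell
# Forward local port 44134 to the Tiller pod in kube-system
kubectl -n kube-system port-forward \
  $(kubectl -n kube-system get pod -l app=helm \
      -o jsonpath='{.items[0].metadata.name}') 44134 &

# Point the Helm v2 client at the forwarded port instead of
# letting it open its own tunnel
export HELM_HOST=:44134
helm list
```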
Hi Matthew,
Yeah unfortunately the "kubectl port-forward" command and also "helm list"
both produce
```
Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist
```
whereas the "get pods" provided above shows that the tiller-deploy pod is
running.
…On Mon, 17 Sep 2018 at 18:00, Matthew Fisher ***@***.***> wrote:
You can use kubectl port-forward to forward port 44134 locally, then set
HELM_HOST=:44134 to interact with Tiller that way.
Are there any errors you see when running helm list without that
configuration though?
|
Can you post the full output of the commands, as well as the output of `kubectl -n kube-system get pods`, for me? |
OK, thanks for your time.
Does this give any clues?
Thx
```
kubectl -n kube-system get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE
coredns-78fcdf6894-cvwgc         1/1     Running   0          1d    10.244.0.3    master
coredns-78fcdf6894-vr9jw         1/1     Running   0          1d    10.244.0.2    master
etcd-master                      1/1     Running   0          1d    10.0.2.15     master
kube-apiserver-master            1/1     Running   0          1d    10.0.2.15     master
kube-controller-manager-master   1/1     Running   0          1d    10.0.2.15     master
kube-flannel-ds-kngs5            1/1     Running   0          1d    10.0.2.15     master
kube-flannel-ds-kr7cc            1/1     Running   0          1d    10.0.2.15     node2
kube-flannel-ds-n7rr6            1/1     Running   2          1d    10.0.2.15     node3
kube-flannel-ds-pkzmw            1/1     Running   0          1d    10.0.2.15     node1
kube-flannel-ds-prrh5            1/1     Running   0          1d    10.0.2.15     node4
kube-proxy-94tmj                 1/1     Running   0          1d    10.0.2.15     node3
kube-proxy-jqgfb                 1/1     Running   0          1d    10.0.2.15     node2
kube-proxy-kgm2r                 1/1     Running   0          1d    10.0.2.15     master
kube-proxy-s9jfd                 1/1     Running   0          1d    10.0.2.15     node1
kube-proxy-xgxdv                 1/1     Running   0          1d    10.0.2.15     node4
kube-scheduler-master            1/1     Running   0          1d    10.0.2.15     master
tiller-deploy-64c9d747bd-csphl   1/1     Running   0          1h    10.244.3.15   node3
```
```
kubectl -n kube-system port-forward $(kubectl -n kube-system get pod -l app=helm -o jsonpath='{.items[0].metadata.name}') 44134
error: error upgrading connection: unable to upgrade connection: pod does not exist
```
```
kubectl --v 8 -n kube-system port-forward $(kubectl -n kube-system get pod -l app=helm -o jsonpath='{.items[0].metadata.name}') 44134
I0917 18:58:14.464676    8008 loader.go:357] Config loaded from file /home/mjb/.kube/config
I0917 18:58:14.470440    8008 round_trippers.go:383] GET https://192.168.33.20:6443/api/v1/namespaces/kube-system/pods/tiller-deploy-64c9d747bd-csphl
I0917 18:58:14.470466    8008 round_trippers.go:390] Request Headers:
I0917 18:58:14.470473    8008 round_trippers.go:393]     Accept: application/json, */*
I0917 18:58:14.470479    8008 round_trippers.go:393]     User-Agent: kubectl/v1.10.1 (linux/amd64) kubernetes/d4ab475
I0917 18:58:14.482079    8008 round_trippers.go:408] Response Status: 200 OK in 11 milliseconds
I0917 18:58:14.482131    8008 round_trippers.go:411] Response Headers:
I0917 18:58:14.482148    8008 round_trippers.go:414]     Date: Mon, 17 Sep 2018 16:58:14 GMT
I0917 18:58:14.482157    8008 round_trippers.go:414]     Content-Type: application/json
I0917 18:58:14.482162    8008 round_trippers.go:414]     Content-Length: 3065
I0917 18:58:14.482249    8008 request.go:874] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"tiller-deploy-64c9d747bd-csphl","generateName":"tiller-deploy-64c9d747bd-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/tiller-deploy-64c9d747bd-csphl","uid":"2fb8d68a-ba8e-11e8-a1be-525400c9c704","resourceVersion":"140256","creationTimestamp":"2018-09-17T15:27:28Z","labels":{"app":"helm","name":"tiller","pod-template-hash":"2075830368"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"tiller-deploy-64c9d747bd","uid":"2fb5973d-ba8e-11e8-a1be-525400c9c704","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-nzmpw","secret":{"secretName":"default-token-nzmpw","defaultMode":420}}],"containers":[{"name":"tiller","image":"gcr.io/kubernetes-helm/tiller:v2.10.0","ports":[{"name":"tiller","containerPort":44134,"protocol":"TCP"},{"name":"http","containerPort":44135,"protocol":"TCP"}],"env":[{"name":"TILLER_NAMESPACE","value":"kube-system"},{"name":"TILLER_HISTORY_MAX","value":"0"} [truncated 2041 chars]
I0917 18:58:14.531377    8008 round_trippers.go:383] GET https://192.168.33.20:6443/api/v1/namespaces/kube-system/pods/tiller-deploy-64c9d747bd-csphl
I0917 18:58:14.531413    8008 round_trippers.go:390] Request Headers:
I0917 18:58:14.531421    8008 round_trippers.go:393]     Accept: application/json, */*
I0917 18:58:14.531427    8008 round_trippers.go:393]     User-Agent: kubectl/v1.10.1 (linux/amd64) kubernetes/d4ab475
I0917 18:58:14.534341    8008 round_trippers.go:408] Response Status: 200 OK in 2 milliseconds
I0917 18:58:14.534370    8008 round_trippers.go:411] Response Headers:
I0917 18:58:14.534385    8008 round_trippers.go:414]     Content-Type: application/json
I0917 18:58:14.534394    8008 round_trippers.go:414]     Content-Length: 3065
I0917 18:58:14.534401    8008 round_trippers.go:414]     Date: Mon, 17 Sep 2018 16:58:14 GMT
I0917 18:58:14.534502    8008 request.go:874] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"tiller-deploy-64c9d747bd-csphl","generateName":"tiller-deploy-64c9d747bd-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/tiller-deploy-64c9d747bd-csphl","uid":"2fb8d68a-ba8e-11e8-a1be-525400c9c704","resourceVersion":"140256","creationTimestamp":"2018-09-17T15:27:28Z","labels":{"app":"helm","name":"tiller","pod-template-hash":"2075830368"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"tiller-deploy-64c9d747bd","uid":"2fb5973d-ba8e-11e8-a1be-525400c9c704","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-nzmpw","secret":{"secretName":"default-token-nzmpw","defaultMode":420}}],"containers":[{"name":"tiller","image":"gcr.io/kubernetes-helm/tiller:v2.10.0","ports":[{"name":"tiller","containerPort":44134,"protocol":"TCP"},{"name":"http","containerPort":44135,"protocol":"TCP"}],"env":[{"name":"TILLER_NAMESPACE","value":"kube-system"},{"name":"TILLER_HISTORY_MAX","value":"0"} [truncated 2041 chars]
I0917 18:58:14.535481    8008 round_trippers.go:383] POST https://192.168.33.20:6443/api/v1/namespaces/kube-system/pods/tiller-deploy-64c9d747bd-csphl/portforward
I0917 18:58:14.535526    8008 round_trippers.go:390] Request Headers:
I0917 18:58:14.535540    8008 round_trippers.go:393]     X-Stream-Protocol-Version: portforward.k8s.io
I0917 18:58:14.535554    8008 round_trippers.go:393]     User-Agent: kubectl/v1.10.1 (linux/amd64) kubernetes/d4ab475
I0917 18:58:14.558269    8008 round_trippers.go:408] Response Status: 404 Not Found in 22 milliseconds
I0917 18:58:14.558304    8008 round_trippers.go:411] Response Headers:
I0917 18:58:14.558321    8008 round_trippers.go:414]     Date: Mon, 17 Sep 2018 16:58:14 GMT
I0917 18:58:14.558327    8008 round_trippers.go:414]     Content-Length: 18
I0917 18:58:14.558332    8008 round_trippers.go:414]     Content-Type: text/plain; charset=utf-8
F0917 18:58:14.558474    8008 helpers.go:119] error: error upgrading connection: unable to upgrade connection: pod does not exist
```
…On Mon, 17 Sep 2018 at 18:33, Matthew Fisher ***@***.***> wrote:
Can you post the full output of the commands as well as the output of kubectl -n kube-system get pods for me?
|
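[Editor's note: two things stand out in the output above. First, in the verbose log, both GET requests for the pod return 200 OK, so the API server can see the pod; it is the POST to the pod's `portforward` subresource that returns 404, which means the failure happens when the API server tries to reach the kubelet on the pod's node. Second, in the `get pod -o wide` output, every host-network pod reports the same IP, 10.0.2.15, the default VirtualBox NAT address, so on a Vagrant cluster the kubelets have likely registered an address the API server cannot route to. A hedged sketch of how to check and fix this; the interface IP and config file path are examples and vary by distro (e.g. `/etc/sysconfig/kubelet` on CentOS, `/etc/default/kubelet` on Debian/Ubuntu):]

```shell
# Do all nodes report the same (NAT) InternalIP?
kubectl get nodes -o wide

# From the master, can the kubelet on the pod's node be reached?
# A TLS/auth error means reachable; a timeout/refusal means not.
curl -k https://<node3-internal-ip>:10250/healthz

# If the InternalIPs collide, pin each kubelet to the address of
# the host-only interface using the real --node-ip kubelet flag
echo 'KUBELET_EXTRA_ARGS=--node-ip=192.168.33.21' | \
  sudo tee /etc/sysconfig/kubelet
sudo systemctl restart kubelet
```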
Did anyone find a solution to this? |
Exactly the same issue as @mjbright mentioned here. |
More details about my environment:

```
kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-s2qkz           1/1     Running   11         111m
kube-system   coredns-fb8b8dccf-vlslg           1/1     Running   9          111m
kube-system   etcd-master1                      1/1     Running   6          110m
kube-system   kube-apiserver-master1            1/1     Running   6          110m
kube-system   kube-controller-manager-master1   1/1     Running   6          110m
kube-system   kube-flannel-ds-amd64-gjrgl       1/1     Running   6          108m
kube-system   kube-flannel-ds-amd64-k9gz9       1/1     Running   6          108m
kube-system   kube-flannel-ds-amd64-skcht       1/1     Running   6          108m
kube-system   kube-proxy-ctx9x                  1/1     Running   6          111m
kube-system   kube-proxy-wnmzv                  1/1     Running   6          111m
kube-system   kube-proxy-zvwfd                  1/1     Running   6          111m
kube-system   kube-scheduler-master1            1/1     Running   6          110m
kube-system   tiller-deploy-8458f6c667-s4hv9    1/1     Running   0          6m45s
```

```
kubectl get services --all-namespaces
NAMESPACE     NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP                  111m
kube-system   kube-dns        ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   111m
kube-system   tiller-deploy   ClusterIP   10.106.7.137   <none>        44134/TCP                107m
```

kubectl describe pod -n kube-system tiller-deploy:
Name: tiller-deploy-8458f6c667-s4hv9
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: node2/10.0.2.15
Start Time: Thu, 11 Apr 2019 13:36:28 -0300
Labels: app=helm
name=tiller
pod-template-hash=8458f6c667
Annotations: <none>
Status: Running
IP: 10.244.2.17
Controlled By: ReplicaSet/tiller-deploy-8458f6c667
Containers:
tiller:
Container ID: docker://aac77761a0f278a4a89f03a266961e45b191144172224827fc0a13885aeb71db
Image: gcr.io/kubernetes-helm/tiller:v2.13.1
Image ID: docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:d52b34a9f9aeec1cf74155ca51fcbb5d872a705914565c782be4531790a4ee0e
Ports: 44134/TCP, 44135/TCP
Host Ports: 0/TCP, 0/TCP
State: Running
Started: Thu, 11 Apr 2019 13:36:29 -0300
Ready: True
Restart Count: 0
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: kube-system
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from tiller-token-9f299 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
tiller-token-9f299:
Type: Secret (a volume populated by a Secret)
SecretName: tiller-token-9f299
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m13s default-scheduler Successfully assigned kube-system/tiller-deploy-8458f6c667-s4hv9 to node2
Normal Pulled 8m12s kubelet, node2 Container image "gcr.io/kubernetes-helm/tiller:v2.13.1" already present on machine
Normal Created 8m12s kubelet, node2 Created container tiller
Normal Started 8m12s kubelet, node2 Started container tiller |
I'm having exactly the same problem. Any solution? When I do the following request: I get the following response:
which ultimately results in my original command
|
I have exactly this problem. Did you find out how to fix it? Was it an issue with DNS resolving from within the cluster? |
I was able to resolve this by restarting the node where tiller was installed and then initializing again.
|
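[Editor's note: spelled out, the restart-and-reinitialize fix described above might look roughly like this sketch; `helm reset` and `helm init` are Helm v2 commands, and the `tiller` service account is assumed to already exist:]

```shell
# On the node hosting the tiller pod
sudo systemctl restart kubelet

# From a machine with kubectl/helm configured (Helm v2)
helm reset --force
helm init --service-account tiller

# Wait for the new Tiller deployment, then verify
kubectl -n kube-system rollout status deploy/tiller-deploy
helm list
```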
In all cases that we know of, the problem is related to Kubernetes itself, not Helm or Tiller. That is why we have been closing these out. The answer I gave over two years ago still applies: your best bet is to go investigate Kubernetes.

AFAIK, there is no single solution to the problem on the Kubernetes side. I have seen it happen due to control plane misconfiguration, API server issues, node issues, DNS issues, and overly aggressive timeouts. If there were only one cause/solution, we could simply document it. But in this case, there's not much we can do but say "go look at Kubernetes". In other issues, we've suggested starting out with tests against

If anyone feels like updating the V2 FAQ, there is already a fairly generic section about proxy failures: https://github.com/helm/helm/blob/dev-v2/docs/install_faq.md This could augment that. |
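[Editor's note: as a starting point for the "go investigate Kubernetes" advice above, a generic, hedged checklist of standard kubectl/systemd commands; `<some-pod>` is a placeholder for any running pod:]

```shell
# Are all nodes Ready, and are their registered addresses routable?
kubectl get nodes -o wide

# Control plane health (works on the Kubernetes versions in this thread)
kubectl get componentstatuses

# Does port-forward fail for every pod, or only Tiller?
kubectl -n kube-system port-forward <some-pod> 8080:80

# Kubelet logs on the node hosting the target pod
journalctl -u kubelet --since "1 hour ago"
```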
Thank you for the detailed response! I started looking at Kubernetes based on responses to the other issues that were raised. I believe my issue was with the kubelet not having contact with the cluster. I have seen this before when setting up new clusters, and a restart of the node seems to resolve it. I updated my comment to reflect what worked for me; hopefully it will help others in the future! |
Hello,
I have a fresh, clean installation of K8s, on which I then installed Helm. If I try to install anything (e.g. mysql), I get this error message:
Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist
Next I tried to set up port-forwarding, but I still got the same error:
K8s and Helm are running on Ubuntu 16.04 in VirtualBox, where I have two network interfaces. I'm not sure whether the problem lies there. My networks:
It is almost the same problem as issue #1770.