Helm error with forwarding ports #2463

Closed · waldauf opened this issue May 18, 2017 · 17 comments
@waldauf commented May 18, 2017

Hello,

I have a fresh installation of K8s, on top of which I installed Helm. Whenever I try to install anything (e.g. mysql) I get this error message:
Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist

# kubectl -n kube-system get pods                                                                                                 
NAME                             READY     STATUS    RESTARTS   AGE
etcd-master                      1/1       Running   2          3h
kube-apiserver-master            1/1       Running   3          3h
kube-controller-manager-master   1/1       Running   3          3h
kube-dns-3913472980-5wj0x        3/3       Running   6          3h
kube-proxy-tmh3k                 1/1       Running   2          3h
kube-proxy-vssfr                 1/1       Running   0          3h
kube-scheduler-master            1/1       Running   3          3h
tiller-deploy-1491950541-5crrg   1/1       Running   0          2h

Next I tried to set up port-forwarding manually, but I still get the same error:

# kubectl -n kube-system port-forward $(kubectl -n kube-system get pod -l app=helm -o jsonpath='{.items[0].metadata.name}') 44134 
error: error upgrading connection: unable to upgrade connection: pod does not exist

K8s and Helm are running on Ubuntu 16.04 in VirtualBox, where I have two network interfaces. I'm not sure whether the problem lies there. My networks:

  • enp0s3 - NAT
  • enp0s8 - host only adapter (for connecting to Node01)

It is almost the same problem as issue #1770.
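(One sanity check that may help with a two-adapter VirtualBox setup like this, since the API server port-forwards through whichever IP each node advertises; the --node-ip remedy below is a common fix for NAT addresses, not something confirmed in this thread, and <host-only-IP> is a placeholder:)

$ kubectl get nodes -o wide                # look at the INTERNAL-IP column
# If every node reports the NAT address (e.g. 10.0.2.15), pin the kubelet to the
# host-only interface; the config file location varies by kubeadm version:
$ echo 'KUBELET_EXTRA_ARGS=--node-ip=<host-only-IP>' | sudo tee /etc/default/kubelet
$ sudo systemctl restart kubelet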

@technosophos (Member)

This is almost definitely a Kubernetes configuration issue. You might try asking in the #helm-users channel on Slack, if that's possible for you.

@bacongobbler (Member)

I'm going to close this ticket due to inactivity, but please re-open if this still needs to be addressed. Thanks!

@longwuyuan

me@mypad[k8s/kubeadm]% kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:26:04Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.5", GitCommit:"32ac1c9073b132b8ba18aa830f46b77dcceb0723", GitTreeState:"clean", BuildDate:"2018-06-21T11:34:22Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

% helm list
Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist
me@mypad[k8s/kubeadm]%

@waldauf (Author) commented Jun 26, 2018

What is returned when you type: kubectl --namespace kube-system get pod -o wide?

@mjbright

I'm also hitting this problem on a vagrant-installed multi-node cluster (based on centos/7 boxes).

> kubectl --namespace kube-system get pod -o wide
NAME                             READY     STATUS    RESTARTS   AGE       IP            NODE
coredns-78fcdf6894-cvwgc         1/1       Running   0          1d        10.244.0.3    master
coredns-78fcdf6894-vr9jw         1/1       Running   0          1d        10.244.0.2    master
etcd-master                      1/1       Running   0          1d        10.0.2.15     master
kube-apiserver-master            1/1       Running   0          1d        10.0.2.15     master
kube-controller-manager-master   1/1       Running   0          1d        10.0.2.15     master
kube-flannel-ds-kngs5            1/1       Running   0          1d        10.0.2.15     master
kube-flannel-ds-kr7cc            1/1       Running   0          23h       10.0.2.15     node2
kube-flannel-ds-n7rr6            1/1       Running   2          23h       10.0.2.15     node3
kube-flannel-ds-pkzmw            1/1       Running   0          23h       10.0.2.15     node1
kube-flannel-ds-prrh5            1/1       Running   0          23h       10.0.2.15     node4
kube-proxy-94tmj                 1/1       Running   0          23h       10.0.2.15     node3
kube-proxy-jqgfb                 1/1       Running   0          23h       10.0.2.15     node2
kube-proxy-kgm2r                 1/1       Running   0          1d        10.0.2.15     master
kube-proxy-s9jfd                 1/1       Running   0          23h       10.0.2.15     node1
kube-proxy-xgxdv                 1/1       Running   0          23h       10.0.2.15     node4
kube-scheduler-master            1/1       Running   0          1d        10.0.2.15     master
tiller-deploy-64c9d747bd-csphl   1/1       Running   0          8m        10.244.3.15   node3

The cluster seems to operate fine (I've been able to install/access services OK).
I installed tiller with helm init, using the latest 2.10.0 binary.

I tried playing with the YAML definition from helm init --output yaml to see if I could "expose" the port somehow (replacing "tiller" with "44134" as the targetPort, for example), but without success (I'm not really sure what I need to do). The service section of that YAML looks like this:

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}
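(For reference, the edit described above, swapping the named targetPort for the numeric port, would look like this in the spec; it is only a sketch of what was tried and, as noted, it did not resolve the error:)

spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: 44134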

Any ideas?

@bacongobbler (Member)

You can use kubectl port-forward to forward port 44134 locally, then export HELM_HOST=:44134 to interact with tiller that way.

Are there any errors you see when running helm list without that configuration though?
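(Spelled out, the workaround above looks roughly like this; HELM_HOST is the Helm 2 client's environment variable for the Tiller address, and the pod lookup is the same one used earlier in the thread:)

$ kubectl -n kube-system port-forward $(kubectl -n kube-system get pod -l app=helm -o jsonpath='{.items[0].metadata.name}') 44134:44134 &
$ export HELM_HOST=127.0.0.1:44134
$ helm list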

@mjbright

mjbright commented Sep 17, 2018 via email

@bacongobbler (Member)

Can you post the full output of the commands as well as the output of kubectl -n kube-system get pods for me?

@mjbright

mjbright commented Sep 17, 2018 via email

@mouhsinelonly

Did anyone find a solution to this?

@brunowego

Exactly the same issue as @mjbright mentioned here.

@brunowego

More details about my environment:

kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-s2qkz           1/1     Running   11         111m
kube-system   coredns-fb8b8dccf-vlslg           1/1     Running   9          111m
kube-system   etcd-master1                      1/1     Running   6          110m
kube-system   kube-apiserver-master1            1/1     Running   6          110m
kube-system   kube-controller-manager-master1   1/1     Running   6          110m
kube-system   kube-flannel-ds-amd64-gjrgl       1/1     Running   6          108m
kube-system   kube-flannel-ds-amd64-k9gz9       1/1     Running   6          108m
kube-system   kube-flannel-ds-amd64-skcht       1/1     Running   6          108m
kube-system   kube-proxy-ctx9x                  1/1     Running   6          111m
kube-system   kube-proxy-wnmzv                  1/1     Running   6          111m
kube-system   kube-proxy-zvwfd                  1/1     Running   6          111m
kube-system   kube-scheduler-master1            1/1     Running   6          110m
kube-system   tiller-deploy-8458f6c667-s4hv9    1/1     Running   0          6m45s
kubectl get services --all-namespaces
NAMESPACE     NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP                  111m
kube-system   kube-dns        ClusterIP   10.96.0.10     <none>        53/UDP,53/TCP,9153/TCP   111m
kube-system   tiller-deploy   ClusterIP   10.106.7.137   <none>        44134/TCP                107m
kubectl describe pod -n kube-system tiller-deploy
Name:               tiller-deploy-8458f6c667-s4hv9
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               node2/10.0.2.15
Start Time:         Thu, 11 Apr 2019 13:36:28 -0300
Labels:             app=helm
                    name=tiller
                    pod-template-hash=8458f6c667
Annotations:        <none>
Status:             Running
IP:                 10.244.2.17
Controlled By:      ReplicaSet/tiller-deploy-8458f6c667
Containers:
  tiller:
    Container ID:   docker://aac77761a0f278a4a89f03a266961e45b191144172224827fc0a13885aeb71db
    Image:          gcr.io/kubernetes-helm/tiller:v2.13.1
    Image ID:       docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:d52b34a9f9aeec1cf74155ca51fcbb5d872a705914565c782be4531790a4ee0e
    Ports:          44134/TCP, 44135/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Thu, 11 Apr 2019 13:36:29 -0300
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from tiller-token-9f299 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  tiller-token-9f299:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  tiller-token-9f299
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  8m13s  default-scheduler  Successfully assigned kube-system/tiller-deploy-8458f6c667-s4hv9 to node2
  Normal  Pulled     8m12s  kubelet, node2     Container image "gcr.io/kubernetes-helm/tiller:v2.13.1" already present on machine
  Normal  Created    8m12s  kubelet, node2     Created container tiller
  Normal  Started    8m12s  kubelet, node2     Started container tiller

@jarvisuser90

I'm having exactly the same problem. Any solution?

When I do the following request:
GET https://192.168.0.10:6443/api/v1/namespaces/kube-system/pods/tiller-deploy-59988697b6-j47w7/portforward

I get the following response:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Upgrade request required",
  "reason": "BadRequest",
  "code": 400
}

which ultimately results in my original command kubectl -n kube-system port-forward $(kubectl -n kube-system get pod -l app=helm -o jsonpath='{.items[0].metadata.name}') 44134 producing this error:

error: error upgrading connection: unable to upgrade connection: pod does not exist
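(A note on that 400: the portforward endpoint requires a SPDY/WebSocket connection upgrade, so a plain GET returns "Upgrade request required" even on a healthy cluster; the meaningful failure is the upgrade error kubectl reports. Running the same command with verbose logging can show where the upgrade fails; a sketch reusing the pod name from above:)

$ kubectl -n kube-system port-forward tiller-deploy-59988697b6-j47w7 44134 -v=8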

@Tails

Tails commented Nov 6, 2019

(Quoting @jarvisuser90's comment above in full.)

I have exactly this. Did you find out how to fix it? Was it an issue with DNS resolving from within the cluster?

@ghost

ghost commented Nov 26, 2019

I was able to resolve this by restarting the node where tiller was installed and then initializing again.

$ helm init --service-account tiller --wait
$HELM_HOME has been configured at C:\Users\arontx\home\.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)

$ helm version
Client: &version.Version{SemVer:"v2.16.0", GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.0", GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}
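(For anyone wanting those steps spelled out, a rough sketch; <node> is a placeholder and the reboot method depends on the environment:)

$ kubectl drain <node> --ignore-daemonsets   # move workloads off the node
# ...reboot the node, e.g. via ssh or the hypervisor...
$ kubectl uncordon <node>
$ helm init --service-account tiller --wait --upgrade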

@technosophos (Member)

In all cases that we know of, the problem is related to Kubernetes itself, not Helm or Tiller. That is why we have been closing these out. The answer I gave over two years ago still applies: Your best bet is to go investigate Kubernetes.

AFAIK, there is no single solution to the problem on the Kubernetes side. I have seen it happen due to control plane misconfiguration, API server issues, node issues, DNS issues, and overly aggressive timeouts. If there were only one cause/solution, we could simply document it. But in this case, there's not much we can do but say "go look at Kubernetes".

In other issues, we've suggested starting out with tests against kubectl attach/proxy/port-forward. That advice still holds true. If those commands don't work, Helm won't work either.
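(A minimal version of those tests, using any running pod; <pod-name> is a placeholder, and a failure in any of these points at the cluster rather than Helm:)

$ kubectl -n kube-system port-forward <pod-name> 44134:44134
$ kubectl -n kube-system attach <pod-name>
$ kubectl proxy --port=8001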

If anyone feels like updating the V2 FAQ, there is already a fairly generic section about proxy failures (https://github.com/helm/helm/blob/dev-v2/docs/install_faq.md) that this could augment.

@ghost

ghost commented Nov 26, 2019

Thank you for the detailed response! I started looking at Kubernetes based on responses to the other issues that were raised. I believe my issue was with the kubelet not having contact with the cluster. I have seen this before when setting up new clusters, and a restart of the node seems to resolve it. I updated my comment to reflect what worked for me; hopefully it will help others in the future!
