
[HELP] 1 node(s) didn't have free ports for the requested pod ports #104

Closed
harshavardhanc opened this issue Sep 9, 2019 · 18 comments
@harshavardhanc commented Sep 9, 2019

I'm trying to install istio in a k3d cluster, but one of the istio components (the service load balancer) is failing to start with the error below.

Warning FailedScheduling 42s (x6 over 2m59s) default-scheduler 0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

NAME                                      READY   STATUS      RESTARTS   AGE
grafana-6fb9f8c5c7-mr7vb                  1/1     Running     0          6m24s
istio-citadel-5cf47dbf7c-jxc4w            1/1     Running     0          6m24s
istio-galley-7898b587db-8jrpq             1/1     Running     0          6m25s
istio-ingressgateway-7c6f8fd795-wl6fn     1/1     Running     0          6m24s
istio-init-crd-10-8qh2j                   0/1     Completed   0          26m
istio-init-crd-11-j7glh                   0/1     Completed   0          26m
istio-init-crd-12-gvsg6                   0/1     Completed   0          26m
istio-nodeagent-clvkf                     1/1     Running     0          6m25s
istio-pilot-5c4b6f576b-2b5zf              2/2     Running     0          6m24s
istio-policy-769664fcf7-hj6bn             2/2     Running     3          6m24s
istio-sidecar-injector-677bd5ccc5-wj9zb   1/1     Running     0          6m24s
istio-telemetry-577c6f5b8c-j9dxn          2/2     Running     3          6m24s
istio-tracing-5d8f57c8ff-t7mm4            1/1     Running     0          6m24s
kiali-7d749f9dcb-w7qxr                    1/1     Running     0          6m24s
prometheus-776fdf7479-gznbs               1/1     Running     0          6m24s
svclb-istio-ingressgateway-4znth          0/9     Pending     0          6m25s

Please help me fix this issue.

@iwilltry42 (Collaborator) commented Sep 9, 2019

Hey there, thanks for filing this issue.
Can you paste the full output of kubectl describe for the failing pod/deployment?

@harshavardhanc (Author) commented Sep 9, 2019

Hey @iwilltry42
Here is the output of the failing pod.

~ k describe pods svclb-istio-ingressgateway-92p9s -n istio-system
Name:           svclb-istio-ingressgateway-92p9s
Namespace:      istio-system
Priority:       0
Node:
Labels:         app=svclb-istio-ingressgateway
                controller-revision-hash=597bd7b896
                pod-template-generation=1
                svccontroller.k3s.cattle.io/svcname=istio-ingressgateway
Annotations:
Status:         Pending
IP:
Controlled By:  DaemonSet/svclb-istio-ingressgateway
Containers:
  lb-port-15020:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15020/TCP
    Host Port:  15020/TCP
    Environment:
      SRC_PORT:    15020
      DEST_PROTO:  TCP
      DEST_PORT:   15020
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-80:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       80/TCP
    Host Port:  80/TCP
    Environment:
      SRC_PORT:    80
      DEST_PROTO:  TCP
      DEST_PORT:   80
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-443:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       443/TCP
    Host Port:  443/TCP
    Environment:
      SRC_PORT:    443
      DEST_PROTO:  TCP
      DEST_PORT:   443
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-31400:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       31400/TCP
    Host Port:  31400/TCP
    Environment:
      SRC_PORT:    31400
      DEST_PROTO:  TCP
      DEST_PORT:   31400
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-15029:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15029/TCP
    Host Port:  15029/TCP
    Environment:
      SRC_PORT:    15029
      DEST_PROTO:  TCP
      DEST_PORT:   15029
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-15030:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15030/TCP
    Host Port:  15030/TCP
    Environment:
      SRC_PORT:    15030
      DEST_PROTO:  TCP
      DEST_PORT:   15030
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-15031:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15031/TCP
    Host Port:  15031/TCP
    Environment:
      SRC_PORT:    15031
      DEST_PROTO:  TCP
      DEST_PORT:   15031
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-15032:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15032/TCP
    Host Port:  15032/TCP
    Environment:
      SRC_PORT:    15032
      DEST_PROTO:  TCP
      DEST_PORT:   15032
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
  lb-port-15443:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15443/TCP
    Host Port:  15443/TCP
    Environment:
      SRC_PORT:    15443
      DEST_PROTO:  TCP
      DEST_PORT:   15443
      DEST_IP:     10.43.159.196
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f5w67 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-f5w67:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-f5w67
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  82s (x7 over 5m18s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

@iwilltry42 (Collaborator) commented Sep 11, 2019

Alright, so it seems like one of the Host Ports is already blocked by another pod.
Anyway, this is not a component of istio but an automatic deployment coming from k3s.

@iwilltry42 (Collaborator) commented Sep 11, 2019

Can you provide more details on
a) how you created the cluster (k3d command)
b) how you deployed istio
please? Just so I can replicate it 👍

@iwilltry42 (Collaborator) commented Sep 19, 2019

Any news on this @harshavardhanc ?

@tony-kerz commented Sep 25, 2019

Experiencing the same issue:
macOS: 10.4.6
Docker Desktop: 2.1.0.3

wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash
export KUBECONFIG=$(k3d get-kubeconfig)
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.3.0 sh -
cd istio-1.3.0/
export PATH=$PWD/bin:$PATH
for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
kubectl apply -f install/kubernetes/istio-demo-auth.yaml
bash-4.4$ kubectl get po -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
grafana-6fc987bd95-pvg9j                  1/1     Running     1          6h57m
istio-citadel-679b7c9b5b-rmqt6            1/1     Running     1          6h57m
istio-cleanup-secrets-1.3.0-wwnfr         0/1     Completed   0          6h57m
istio-egressgateway-5db67796d5-msz5n      1/1     Running     1          6h57m
istio-galley-7ff97f98b5-n5zng             1/1     Running     1          6h57m
istio-grafana-post-install-1.3.0-mfbnm    0/1     Completed   0          6h57m
istio-ingressgateway-859bb7b4-24l9p       1/1     Running     1          6h57m
istio-pilot-9b9f7f5c8-99mj9               2/2     Running     2          6h57m
istio-policy-754cbf67fb-6x9dl             2/2     Running     7          6h57m
istio-security-post-install-1.3.0-7bh9n   0/1     Completed   0          6h57m
istio-sidecar-injector-68f4668959-274mv   1/1     Running     1          6h57m
istio-telemetry-7cf8dcfd54-tnnbq          2/2     Running     8          6h57m
istio-tracing-669fd4b9f8-gsqm5            1/1     Running     1          6h57m
kiali-94f8cbd99-gfgzl                     1/1     Running     1          6h57m
prometheus-776fdf7479-kv95j               1/1     Running     1          6h57m
svclb-istio-ingressgateway-bkpw8          0/9     Pending     0          6h57m
bash-4.4$ kubectl describe pod svclb-istio-ingressgateway-bkpw8 -n istio-system
Name:               svclb-istio-ingressgateway-bkpw8
Namespace:          istio-system
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=svclb-istio-ingressgateway
                    controller-revision-hash=688bbd58b
                    pod-template-generation=1
                    svccontroller.k3s.cattle.io/svcname=istio-ingressgateway
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      DaemonSet/svclb-istio-ingressgateway
Containers:
  lb-port-15020:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15020/TCP
    Host Port:  15020/TCP
    Environment:
      SRC_PORT:    15020
      DEST_PROTO:  TCP
      DEST_PORT:   15020
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-80:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       80/TCP
    Host Port:  80/TCP
    Environment:
      SRC_PORT:    80
      DEST_PROTO:  TCP
      DEST_PORT:   80
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-443:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       443/TCP
    Host Port:  443/TCP
    Environment:
      SRC_PORT:    443
      DEST_PROTO:  TCP
      DEST_PORT:   443
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-31400:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       31400/TCP
    Host Port:  31400/TCP
    Environment:
      SRC_PORT:    31400
      DEST_PROTO:  TCP
      DEST_PORT:   31400
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-15029:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15029/TCP
    Host Port:  15029/TCP
    Environment:
      SRC_PORT:    15029
      DEST_PROTO:  TCP
      DEST_PORT:   15029
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-15030:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15030/TCP
    Host Port:  15030/TCP
    Environment:
      SRC_PORT:    15030
      DEST_PROTO:  TCP
      DEST_PORT:   15030
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-15031:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15031/TCP
    Host Port:  15031/TCP
    Environment:
      SRC_PORT:    15031
      DEST_PROTO:  TCP
      DEST_PORT:   15031
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-15032:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15032/TCP
    Host Port:  15032/TCP
    Environment:
      SRC_PORT:    15032
      DEST_PROTO:  TCP
      DEST_PORT:   15032
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
  lb-port-15443:
    Image:      rancher/klipper-lb:v0.1.1
    Port:       15443/TCP
    Host Port:  15443/TCP
    Environment:
      SRC_PORT:    15443
      DEST_PROTO:  TCP
      DEST_PORT:   15443
      DEST_IP:     10.43.69.4
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z58mp (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-z58mp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z58mp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason            Age                     From               Message
  ----     ------            ----                    ----               -------
  Warning  FailedScheduling  5m12s (x96 over 6h57m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  26s (x6 over 4m54s)     default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
@iwilltry42 (Collaborator) commented Oct 7, 2019

I'm pretty sure that ports 80 and 443 are already taken by traefik.

@harshavardhanc (Author) commented Oct 15, 2019

Sorry for the late reply @iwilltry42, I was OOO. I was using k3d create --name cluster_name.
You are right @iwilltry42, so I then created the cluster without traefik: k3d create --server-arg --no-deploy --server-arg traefik --name cluster_name
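For reference, the two cluster-creation variants discussed in this thread, in the old k3d v1 syntax used here (`mycluster` is a placeholder name, and newer k3d releases use different commands and flags):

```
# Default cluster: k3s also deploys traefik, whose svclb pod
# occupies host ports 80 and 443 on the single node
k3d create --name mycluster

# Cluster without traefik, leaving ports 80/443 free for
# istio's svclb-istio-ingressgateway pod
k3d create --server-arg --no-deploy --server-arg traefik --name mycluster
```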

@iwilltry42 (Collaborator) commented Oct 15, 2019

So it works without traefik? Can I go ahead and close this issue then @harshavardhanc ? 👍

@rjshrjndrn commented Oct 19, 2019

@iwilltry42 Any idea why this happens? I think it'd be good to have this in the docs, in case somebody else gets blocked.

@harshavardhanc (Author) commented Oct 19, 2019

Yes, it works without traefik @iwilltry42.

@iwilltry42 (Collaborator) commented Oct 19, 2019

@rjshrjndrn yep, it's because of the Service Load Balancer, which reacts to services of type: LoadBalancer.
See the related k3s documentation: https://rancher.com/docs/k3s/latest/en/configuration/#service-load-balancer
If you don't need or want this feature, start the cluster with --server-arg '--no-deploy servicelb'.
Note: the pod that stays in Pending state is part of the k3s infrastructure, not part of the istio manifests/chart which you deployed.

Addition: Do both of you have a k3d cluster created with only a single node? (The controller tries to find a node where the ports are free, and obviously there's none in a single-node cluster where traefik is already running and has the ports occupied.)
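Since the scheduler error only says that ports are taken, not which ones or by whom, it can help to list the hostPorts existing pods already claim. A rough, jq-free sketch; the inline JSON below stands in for real `kubectl get pods -A -o json` output, which you would pipe in instead on an actual cluster:

```shell
# Sample pod spec standing in for `kubectl get pods -A -o json` output
sample='{"spec":{"containers":[
  {"name":"lb-port-80","ports":[{"containerPort":80,"hostPort":80}]},
  {"name":"lb-port-443","ports":[{"containerPort":443,"hostPort":443}]}]}}'

# Extract every hostPort value: grep -o isolates the "hostPort":N pairs,
# sed strips everything up to the colon, leaving just the port numbers.
echo "$sample" | grep -o '"hostPort":[0-9]*' | sed 's/.*://'
# prints 80 and 443, one per line
```

Any port this prints is unavailable to a new pod requesting the same hostPort on that node.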

@iwilltry42 iwilltry42 self-assigned this Oct 19, 2019
@iwilltry42 iwilltry42 added the question label Oct 19, 2019
@iwilltry42 iwilltry42 changed the title 1 node(s) didn't have free ports for the requested pod ports [HELP] 1 node(s) didn't have free ports for the requested pod ports Oct 19, 2019
@rjshrjndrn commented Oct 19, 2019

@iwilltry42 I always run k3d with one node and without traefik.
For type: LoadBalancer, I always get an IP. Usually I install istio for ingress and tinker with it.

Addition: Do both of you have a k3d cluster created with only a single node? (since the the controller tries to find a node, where the ports are free and obviously, there's none in a single node cluster, where traefik is already running and has the ports occupied)

Why can't traefik run on the same node? I don't think there's any toleration restricting traefik to run on the master itself.

Note: I tried the cluster with traefik and on one node; for me it works perfectly fine.

@iwilltry42 (Collaborator) commented Oct 20, 2019

@rjshrjndrn, I'm not sure I understand you correctly there.
But the outputs posted here show a pod svclb-istio-ingressgateway-abcd stuck in Pending state.
This pod is not spawned by the istio manifests; it's spawned by a controller that is part of k3s. This controller is similar to MetalLB: for every kind: Service of type: LoadBalancer that you create in the cluster, it tries to find a node where it can map the requested port from the node to the pod.
Now, if you create a cluster without the --no-deploy=traefik flag, you'll already have a pod svclb-traefik-abcd with two containers which use hostPort: 80 and hostPort: 443 (meaning that ports 80 and 443 on the node are in use).
Unfortunately, svclb-istio-ingressgateway needs exactly the same ports, but since those are already taken, it's stuck in Pending state.
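In manifest terms, both svclb DaemonSets declare containers like the sketch below (field values taken from the describe output in this thread; the surrounding structure is trimmed for illustration):

```yaml
# Trimmed sketch of the conflicting container spec shared by
# svclb-traefik and svclb-istio-ingressgateway pods
containers:
  - name: lb-port-80
    image: rancher/klipper-lb:v0.1.1
    ports:
      - containerPort: 80
        hostPort: 80     # binds port 80 on the node itself
  - name: lb-port-443
    image: rancher/klipper-lb:v0.1.1
    ports:
      - containerPort: 443
        hostPort: 443    # binds port 443 on the node itself
```

A given hostPort can be bound by only one pod per node, so on a single-node cluster whichever svclb pod starts second stays Pending.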

@rjshrjndrn commented Oct 22, 2019

Okay. Thank you @iwilltry42 for the clarification, it makes sense now.
Basically it's a clash of ingresses, right?

@iwilltry42 (Collaborator) commented Oct 22, 2019

Are there any questions left here or can I close this issue? 👍

@harshavardhanc (Author) commented Oct 22, 2019

You can close this issue @iwilltry42
