Ingress Controller failed Scheduling due to NodeNetworkUnavailable #130

Closed
bdeak opened this issue Feb 22, 2018 · 2 comments
bdeak commented Feb 22, 2018

Hi,

I've deployed the ingress controller pod, but it's stuck in the Pending state; kubectl describe shows the following:

# kubectl describe pod kube-ingress-aws-controller-5959c8f9c7-hbmj6 -n kube-system
Name:           kube-ingress-aws-controller-5959c8f9c7-hbmj6
Namespace:      kube-system
Node:           <none>
Labels:         application=kube-ingress-aws-controller
                component=ingress
                pod-template-hash=1515749573
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicaSet/kube-ingress-aws-controller-5959c8f9c7
Containers:
  controller:
    Image:  registry.opensource.zalan.do/teapot/kube-ingress-aws-controller:latest
    Port:   <none>
    Environment:
      AWS_REGION:  eu-central-1
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9vlp7 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-9vlp7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-9vlp7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  12m (x26 over 19m)  default-scheduler  0/4 nodes are available: 4 NodeNetworkUnavailable.
  Warning  FailedScheduling  6s (x36 over 10m)   default-scheduler  0/4 nodes are available: 4 NodeNetworkUnavailable.
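
To double-check which nodes actually carry the offending condition (the event calls it NodeNetworkUnavailable; the condition type on the node object is NetworkUnavailable), something like this lists it per node (just a sketch, adjust the jsonpath as needed):

# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="NetworkUnavailable")].status}{"\t"}{.status.conditions[?(@.type=="NetworkUnavailable")].reason}{"\n"}{end}'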

I'm using Calico as the CNI, and it is working as expected.

# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.1.112.114 | node-to-node mesh | up    | 18:54:43 | Established |
| 10.1.116.143 | node-to-node mesh | up    | 18:54:45 | Established |
| 10.1.122.222 | node-to-node mesh | up    | 18:54:46 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.
# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                           READY     STATUS    RESTARTS   AGE       IP               NODE
default       hello-world-58f9949f8-ff5tt                    1/1       Running   6          1d        192.168.162.66   ip-10-1-122-222
default       hello-world-58f9949f8-n8c6x                    1/1       Running   6          1d        192.168.66.129   ip-10-1-112-114
kube-system   calico-etcd-d2k9n                              1/1       Running   1          1d        10.1.116.143     ip-10-1-116-143
kube-system   calico-etcd-j5kpc                              1/1       Running   4          1d        10.1.122.222     ip-10-1-122-222
kube-system   calico-etcd-vbbls                              1/1       Running   0          12m       10.1.115.250     ip-10-1-115-250
kube-system   calico-kube-controllers-d987c6db5-zcmfk        1/1       Running   10         1d        10.1.112.114     ip-10-1-112-114
kube-system   calico-node-9g2lh                              2/2       Running   0          3h        10.1.122.222     ip-10-1-122-222
kube-system   calico-node-qfr5x                              2/2       Running   0          3h        10.1.112.114     ip-10-1-112-114
kube-system   calico-node-xb54n                              2/2       Running   0          3h        10.1.116.143     ip-10-1-116-143
kube-system   calico-node-zwmzt                              2/2       Running   2          3h        10.1.115.250     ip-10-1-115-250
kube-system   etcd-ip-10-1-115-250                           1/1       Running   29         2d        10.1.115.250     ip-10-1-115-250
kube-system   etcd-ip-10-1-116-143                           1/1       Running   26         1d        10.1.116.143     ip-10-1-116-143
kube-system   etcd-ip-10-1-122-222                           1/1       Running   24         3d        10.1.122.222     ip-10-1-122-222
kube-system   kube-apiserver-ip-10-1-115-250                 1/1       Running   6          5h        10.1.115.250     ip-10-1-115-250
kube-system   kube-apiserver-ip-10-1-116-143                 1/1       Running   9          5h        10.1.116.143     ip-10-1-116-143
kube-system   kube-apiserver-ip-10-1-122-222                 1/1       Running   5          5h        10.1.122.222     ip-10-1-122-222
kube-system   kube-controller-manager-ip-10-1-115-250        1/1       Running   1          4h        10.1.115.250     ip-10-1-115-250
kube-system   kube-controller-manager-ip-10-1-116-143        1/1       Running   1          5h        10.1.116.143     ip-10-1-116-143
kube-system   kube-controller-manager-ip-10-1-122-222        1/1       Running   1          5h        10.1.122.222     ip-10-1-122-222
kube-system   kube-dns-ccf7b96b9-5xp69                       3/3       Running   3          23h       192.168.66.128   ip-10-1-112-114
kube-system   kube-ingress-aws-controller-5959c8f9c7-hbmj6   0/1       Pending   0          21m       <none>           <none>
kube-system   kube-proxy-4z4gw                               1/1       Running   1          1d        10.1.116.143     ip-10-1-116-143
kube-system   kube-proxy-9f44v                               1/1       Running   3          1d        10.1.122.222     ip-10-1-122-222
kube-system   kube-proxy-t2k68                               1/1       Running   3          1d        10.1.115.250     ip-10-1-115-250
kube-system   kube-proxy-zglpb                               1/1       Running   1          23h       10.1.112.114     ip-10-1-112-114
kube-system   kube-scheduler-ip-10-1-115-250                 1/1       Running   3          3d        10.1.115.250     ip-10-1-115-250
kube-system   kube-scheduler-ip-10-1-116-143                 1/1       Running   15         1d        10.1.116.143     ip-10-1-116-143
kube-system   kube-scheduler-ip-10-1-122-222                 1/1       Running   12         3d        10.1.122.222     ip-10-1-122-222
kube-system   kubernetes-dashboard-6658cd6658-qx89w          1/1       Running   7          1d        192.168.162.65   ip-10-1-122-222
kube-system   skipper-ingress-7knrk                          0/1       Running   0          3h        10.1.112.114     ip-10-1-112-114
kube-system   skipper-ingress-c9v8l                          0/1       Running   0          3h        10.1.122.222     ip-10-1-122-222
kube-system   skipper-ingress-hsjtm                          0/1       Running   0          3h        10.1.116.143     ip-10-1-116-143
kube-system   skipper-ingress-nxhgf                          0/1       Running   1          3h        10.1.115.250     ip-10-1-115-250

I've installed Kubernetes using Puppet.
Any idea what might be missing?

Thanks!


bdeak commented Feb 22, 2018

I think this is unrelated to the ingress controller; I'll keep looking.


bdeak commented Feb 22, 2018

Closing; this was caused by kubernetes/kubernetes#44254.
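
If I read that upstream issue correctly, with the AWS cloud provider enabled every node gets the NetworkUnavailable condition (NoRouteCreated), and only the cloud route controller ever clears it; with an external CNI like Calico and no cloud routes it stays true, so the scheduler skips all nodes. As a stop-gap until that's fixed upstream, the condition can be cleared by hand by patching each node's status subresource. A rough sketch (the node name is one of mine from above, the reason/message text is arbitrary, and the timestamp is a placeholder; repeat per node):

# kubectl proxy --port=8001 &
# curl -s -X PATCH http://127.0.0.1:8001/api/v1/nodes/ip-10-1-112-114/status \
    -H 'Content-Type: application/strategic-merge-patch+json' \
    -d '{"status":{"conditions":[{"type":"NetworkUnavailable","status":"False","reason":"CalicoIsUp","message":"Calico is providing the pod network","lastTransitionTime":"2018-02-22T00:00:00Z"}]}}'

The conditions list is merged by type, so this only touches NetworkUnavailable and leaves the node's other conditions alone.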

bdeak closed this as completed Feb 22, 2018