kube-dns - Failed create pod sandbox #56902

Closed
cspwizard opened this Issue Dec 6, 2017 · 17 comments


cspwizard commented Dec 6, 2017

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

After initializing the cluster with kubeadm, the kube-dns pods hang in ContainerCreating status, with pod sandbox creation failing.

What you expected to happen:

kube-dns pods running

How to reproduce it (as minimally and precisely as possible):
Created the cluster via kubeadm:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --token-ttl=0 --token=bd11ac.54147b1b3fd9620d --apiserver-cert-extra-sans=kube,kube.internal

kubectl taint nodes kube node-role.kubernetes.io/master-

Applied the flannel network plugin:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Also checked with kube-router, which shows the same problem:

kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml

Anything else we need to know?:

NAMESPACE     NAME                           READY     STATUS              RESTARTS   AGE       IP            NODE
kube-system   etcd-kube                      1/1       Running             2          48m       10.10.10.12   kube
kube-system   kube-apiserver-kube            1/1       Running             2          48m       10.10.10.12   kube
kube-system   kube-controller-manager-kube   1/1       Running             2          49m       10.10.10.12   kube
kube-system   kube-dns-545bc4bfd4-bskll      0/3       ContainerCreating   0          49m       <none>        kube
kube-system   kube-flannel-ds-h9mcw          1/1       Running             2          43m       10.10.10.12   kube
kube-system   kube-proxy-f75q9               1/1       Running             2          49m       10.10.10.12   kube
kube-system   kube-scheduler-kube            1/1       Running             2          49m       10.10.10.12   kube
Name:           kube-dns-545bc4bfd4-bskll
Namespace:      kube-system
Node:           kube/10.10.10.12
Start Time:     Wed, 06 Dec 2017 23:58:55 +0300
Labels:         k8s-app=kube-dns
                pod-template-hash=1016706980
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kube-dns-545bc4bfd4","uid":"69d564df-dac7-11e7-9ac1-00155d02...
Status:         Pending
IP:
Created By:     ReplicaSet/kube-dns-545bc4bfd4
Controlled By:  ReplicaSet/kube-dns-545bc4bfd4
Containers:
  kubedns:
    Container ID:
    Image:         gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
    Image ID:
    Ports:         10053/UDP, 10053/TCP, 10055/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  170Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-jsxv8 (ro)
  dnsmasq:
    Container ID:
    Image:         gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
    Image ID:
    Ports:         53/UDP, 53/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        150m
      memory:     20Mi
    Liveness:     http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-jsxv8 (ro)
  sidecar:
    Container ID:
    Image:         gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
    Image ID:
    Port:          10054/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     20Mi
    Liveness:     http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-jsxv8 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  kube-dns-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  true
  kube-dns-token-jsxv8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-dns-token-jsxv8
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                 From               Message
  ----     ------                  ----                ----               -------
  Warning  FailedScheduling        42m (x22 over 47m)  default-scheduler  No nodes are available that match all of the predicates: NodeNotReady (1).
  Normal   Scheduled               41m                 default-scheduler  Successfully assigned kube-dns-545bc4bfd4-bskll to kube
  Normal   SuccessfulMountVolume   41m                 kubelet, kube      MountVolume.SetUp succeeded for volume "kube-dns-config"
  Normal   SuccessfulMountVolume   41m                 kubelet, kube      MountVolume.SetUp succeeded for volume "kube-dns-token-jsxv8"
  Warning  FailedCreatePodSandBox  41m                 kubelet, kube      Failed create pod sandbox.
  Warning  FailedSync              39m (x11 over 41m)  kubelet, kube      Error syncing pod
  Normal   SandboxChanged          36m (x25 over 41m)  kubelet, kube      Pod sandbox changed, it will be killed and re-created.
  Normal   SuccessfulMountVolume   30m                 kubelet, kube      MountVolume.SetUp succeeded for volume "kube-dns-config"
  Normal   SuccessfulMountVolume   30m                 kubelet, kube      MountVolume.SetUp succeeded for volume "kube-dns-token-jsxv8"
  Warning  FailedSync              28m (x11 over 30m)  kubelet, kube      Error syncing pod
  Normal   SandboxChanged          25m (x25 over 30m)  kubelet, kube      Pod sandbox changed, it will be killed and re-created.
  Normal   SuccessfulMountVolume   24m                 kubelet, kube      MountVolume.SetUp succeeded for volume "kube-dns-config"
  Normal   SuccessfulMountVolume   24m                 kubelet, kube      MountVolume.SetUp succeeded for volume "kube-dns-token-jsxv8"
  Warning  FailedSync              22m (x11 over 24m)  kubelet, kube      Error syncing pod
  Normal   SandboxChanged          4m (x94 over 24m)   kubelet, kube      Pod sandbox changed, it will be killed and re-created.
  Normal   SuccessfulMountVolume   3m                  kubelet, kube      MountVolume.SetUp succeeded for volume "kube-dns-config"
  Normal   SuccessfulMountVolume   3m                  kubelet, kube      MountVolume.SetUp succeeded for volume "kube-dns-token-jsxv8"
  Warning  FailedSync              1m (x11 over 3m)    kubelet, kube      Error syncing pod
  Normal   SandboxChanged          48s (x12 over 3m)   kubelet, kube      Pod sandbox changed, it will be killed and re-created.
Dec 07 10:30:44 kube kubelet[1209]: W1207 10:30:44.077574    1209 docker_sandbox.go:343] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "kube-dns-545bc4bfd4-bskll_kube-system": CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "5aaa783222b550d34d697af24c6447a888e242bf45ea745acd5e845933b01ea6"
Dec 07 10:30:44 kube kubelet[1209]: W1207 10:30:44.078511    1209 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "5aaa783222b550d34d697af24c6447a888e242bf45ea745acd5e845933b01ea6"
Dec 07 10:30:44 kube kubelet[1209]: E1207 10:30:44.078791    1209 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 07 10:30:44 kube kubelet[1209]: E1207 10:30:44.079367    1209 remote_runtime.go:115] StopPodSandbox "5aaa783222b550d34d697af24c6447a888e242bf45ea745acd5e845933b01ea6" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-bskll_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 07 10:30:44 kube kubelet[1209]: E1207 10:30:44.079564    1209 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "5aaa783222b550d34d697af24c6447a888e242bf45ea745acd5e845933b01ea6"}
Dec 07 10:30:44 kube kubelet[1209]: E1207 10:30:44.079741    1209 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "69d619f2-dac7-11e7-9ac1-00155d02520b" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-bskll_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 07 10:30:44 kube kubelet[1209]: E1207 10:30:44.079912    1209 pod_workers.go:182] Error syncing pod 69d619f2-dac7-11e7-9ac1-00155d02520b ("kube-dns-545bc4bfd4-bskll_kube-system(69d619f2-dac7-11e7-9ac1-00155d02520b)"), skipping: failed to "KillPodSandbox" for "69d619f2-dac7-11e7-9ac1-00155d02520b" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-bskll_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 07 10:30:51 kube kubelet[1209]: W1207 10:30:51.086671    1209 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 07 10:30:56 kube kubelet[1209]: W1207 10:30:56.077148    1209 docker_sandbox.go:343] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "kube-dns-545bc4bfd4-bskll_kube-system": CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "5aaa783222b550d34d697af24c6447a888e242bf45ea745acd5e845933b01ea6"
Dec 07 10:30:56 kube kubelet[1209]: W1207 10:30:56.077596    1209 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "5aaa783222b550d34d697af24c6447a888e242bf45ea745acd5e845933b01ea6"
Dec 07 10:30:56 kube kubelet[1209]: E1207 10:30:56.077911    1209 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 07 10:30:56 kube kubelet[1209]: E1207 10:30:56.078520    1209 remote_runtime.go:115] StopPodSandbox "5aaa783222b550d34d697af24c6447a888e242bf45ea745acd5e845933b01ea6" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-bskll_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 07 10:30:56 kube kubelet[1209]: E1207 10:30:56.078545    1209 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "5aaa783222b550d34d697af24c6447a888e242bf45ea745acd5e845933b01ea6"}
Dec 07 10:30:56 kube kubelet[1209]: E1207 10:30:56.078573    1209 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "69d619f2-dac7-11e7-9ac1-00155d02520b" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-bskll_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 07 10:30:56 kube kubelet[1209]: E1207 10:30:56.078590    1209 pod_workers.go:182] Error syncing pod 69d619f2-dac7-11e7-9ac1-00155d02520b ("kube-dns-545bc4bfd4-bskll_kube-system(69d619f2-dac7-11e7-9ac1-00155d02520b)"), skipping: failed to "KillPodSandbox" for "69d619f2-dac7-11e7-9ac1-00155d02520b" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-bskll_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 07 10:31:01 kube kubelet[1209]: W1207 10:31:01.096087    1209 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 07 10:31:09 kube kubelet[1209]: W1207 10:31:09.077450    1209 docker_sandbox.go:343] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "kube-dns-545bc4bfd4-bskll_kube-system": CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "5aaa783222b550d34d697af24c6447a888e242bf45ea745acd5e845933b01ea6"
Dec 07 10:31:09 kube kubelet[1209]: W1207 10:31:09.078299    1209 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "5aaa783222b550d34d697af24c6447a888e242bf45ea745acd5e845933b01ea6"
Dec 07 10:31:09 kube kubelet[1209]: E1207 10:31:09.078626    1209 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:

VM hosted on Hyper-V, configured with 4 cores and 10240 MB RAM

  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="17.04 (Zesty Zapus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 17.04"
VERSION_ID="17.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=zesty
UBUNTU_CODENAME=zesty
  • Kernel (e.g. uname -a):
Linux kube 4.10.0-40-generic #44-Ubuntu SMP Thu Nov 9 14:49:09 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:

kubeadm 1.8.4

  • Others:
docker version

Client:
 Version:      17.09.0-ce
 API version:  1.32
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:42:45 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.09.0-ce
 API version:  1.32 (minimum version 1.12)
 Go version:   go1.8.3
 Git commit:   afdb6d4
 Built:        Tue Sep 26 22:41:24 2017
 OS/Arch:      linux/amd64
 Experimental: false

feiskyer (Member) commented Dec 7, 2017

/sig node

feiskyer (Member) commented Dec 7, 2017

@cspwizard Could you check the kubelet logs to see why the sandbox is not coming up?

cspwizard commented Dec 7, 2017

@feiskyer I've added an excerpt from the logs; it looks like there is some problem with flannel and CNI. I'll also check the logs with the kube-router plugin and update here.

Well, I checked the CNI bin folder and it is indeed missing portmap. That seems a bit strange, since everything was working a couple of weeks ago when I started playing with k8s.
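
For reference, the check itself is just listing the plugin directory the kubelet searches (the paths come from the kubelet log above):

# List the CNI plugin binaries available to the kubelet
ls /opt/cni/bin
# On an affected node this typically shows bridge, flannel, host-local, loopback, etc.,
# but no portmap, which matches the 'failed to find plugin "portmap"' error above.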

cspwizard commented Dec 7, 2017

After I added the missing CNI binaries, everything worked fine.
This looks like a wrong dependency for kubelet 1.8.4: it depends on kubernetes-cni 0.5.1, which does not include the portmap plugin.
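
For anyone hitting the same gap, a minimal sketch of installing the plugin set by hand (the tarball name assumes the containernetworking/plugins v0.6.0 release artifact; verify the exact filename and architecture on the releases page before running):

curl -LO https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar -xzf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin   # this release ships portmap alongside the other plugins
ls /opt/cni/bin | grep portmap                               # confirm the plugin is now present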

feiskyer (Member) commented Dec 8, 2017

@cspwizard The portmap plugin is only included in CNI v0.6.
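
On Debian/Ubuntu kubeadm installs the plugin set comes from the kubernetes-cni package, so a quick, hedged way to see which version you actually have and whether it ships portmap:

apt-cache policy kubernetes-cni                     # installed vs. candidate package version
dpkg -L kubernetes-cni | grep portmap \
  || echo "portmap is not shipped by this package version"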

feiskyer (Member) commented Dec 8, 2017

I think it will be formally included together with Kubernetes v1.9.

@cspwizard Can we close the issue, since the problem has been solved?

cspwizard commented Dec 8, 2017

Yeah, as long as that doesn't get forgotten :)

cspwizard closed this Dec 8, 2017

00mfg commented Dec 8, 2017

Dear cspwizard & feiskyer,
I also got the error cspwizard mentioned above, and I'm new to k8s. Could you please give me more detail about how to solve the problem? Thanks.

lc@k8s-master:~$ kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
k8s-master   Ready     master    6h        v1.8.5
node1        Ready               6h        v1.8.5
node2        Ready               6h        v1.8.5

lc@k8s-master:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS              RESTARTS   AGE
kube-system   etcd-k8s-master                      1/1       Running             0          6h
kube-system   kube-apiserver-k8s-master            1/1       Running             0          6h
kube-system   kube-controller-manager-k8s-master   1/1       Running             0          6h
kube-system   kube-dns-545bc4bfd4-lc8c8            0/3       ContainerCreating   0          6h
kube-system   kube-flannel-ds-hglf5                1/1       Running             0          6h
kube-system   kube-flannel-ds-slzmc                1/1       Running             0          6h
kube-system   kube-flannel-ds-zqr2w                1/1       Running             0          6h
kube-system   kube-proxy-dhw29                     1/1       Running             0          6h
kube-system   kube-proxy-qlwrv                     1/1       Running             0          6h
kube-system   kube-proxy-sxq6t                     1/1       Running             0          6h
kube-system   kube-scheduler-k8s-master            1/1       Running             0          6h

Events:
  Type    Reason          Age                From                 Message
  ----    ------          ----               ----                 -------
  Normal  SandboxChanged  4m (x675 over 6h)  kubelet, k8s-master  Pod sandbox changed, it will be killed and re-crea

cspwizard commented Dec 8, 2017

@00mfg Check the kubelet logs. If the problem is the missing portmap plugin, just download the 0.6 release of CNI (https://github.com/containernetworking/cni/releases) for your platform and put it into the CNI folder (/opt/cni/bin).
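
Before copying anything, a quick way to confirm it is the same portmap problem (standard journalctl/ls usage; the paths are the ones from the kubelet log above):

sudo journalctl -u kubelet --no-pager | grep 'failed to find plugin "portmap"'
ls -l /opt/cni/bin/portmap   # "No such file or directory" here confirms the plugin is missing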

00mfg commented Dec 11, 2017

@cspwizard Got it and resolved the problem, thanks!

adnavare (Contributor) commented Dec 15, 2017

Hi all, I faced a similar issue of being unable to find the portmap plugin. How did you fix it? I used https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml#L107, which uses the flannel 0.9.1 image; I think this is the latest. Should I upgrade Kubernetes? Right now I am running K8s 1.8.5.

00mfg commented Dec 15, 2017

You just need to download portmap from https://github.com/projectcalico/cni-plugin/releases/download/v1.9.1/portmap, then put it into /opt/cni/bin.
I am running 1.8.5 and this solved the issue.
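
A minimal sketch of that step, using the URL quoted above (verify the binary matches your platform before installing it):

curl -L -o portmap https://github.com/projectcalico/cni-plugin/releases/download/v1.9.1/portmap
sudo install -m 0755 portmap /opt/cni/bin/portmap   # the kubelet picks it up on the next sandbox attempt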

adnavare (Contributor) commented Dec 15, 2017

I copied v0.6 from the link above, which gave me two programs, cnitool and noop. Copying both into /opt/cni/bin did not work for me.

adnavare (Contributor) commented Dec 15, 2017

Thanks, it worked 👍

akhilyendluri commented Jun 26, 2018

Hi,

I am facing a similar issue.

I created a cluster in AWS EKS using the Terraform script given in https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html#guide-overview

After this I deployed the guestbook application as described in https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html

Once my pods got created, I noticed that they were not ready. This is what I saw in kubectl describe pods:

Name:           redis-master-8rqtb
Namespace:      default
Node:           ip-10-0-0-232.ec2.internal/10.0.0.232
Start Time:     Tue, 26 Jun 2018 13:18:22 -0400
Labels:         app=redis
                role=master
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  ReplicationController/redis-master
Containers:
  redis-master:
    Container ID:
    Image:          redis:2.8.23
    Image ID:
    Port:           6379/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4xr62 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-4xr62:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4xr62
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:
  Type     Reason                  Age                 From                                 Message
  ----     ------                  ----                ----                                 -------
  Warning  FailedCreatePodSandBox  32m                 kubelet, ip-10-0-0-232.ec2.internal  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "redis-master-8rqtb_default" network: rpc error: code = Unavailable desc = grpc: the connection is unavailable
  Normal   SandboxChanged          2m (x2994 over 1h)  kubelet, ip-10-0-0-232.ec2.internal  Pod sandbox changed, it will be killed and re-created.

I tried adding flannel using kubectl apply -f kube-flannel.yml, but even then the issue was not resolved.

I tried to use journalctl -u kubelet to get the kubelet logs, but the logs were empty.

akhil@ubuntu:~$ kubectl version --short

Client Version: v1.10.3
Server Version: v1.10.3

akhil@ubuntu:~$ uname -a

Linux ubuntu 4.4.0-87-generic #110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

akhil@ubuntu:~$ kubectl get pods --all-namespaces

NAMESPACE     NAME                        READY     STATUS              RESTARTS   AGE
default       redis-master-8rqtb          0/1       ContainerCreating   0          1h
kube-system   aws-node-28gkv              0/1       CrashLoopBackOff    18         1h
kube-system   aws-node-bfvf2              0/1       CrashLoopBackOff    18         1h
kube-system   aws-node-kn759              0/1       CrashLoopBackOff    18         1h
kube-system   kube-dns-64b69465b4-b8p84   0/3       ContainerCreating   0          2h
kube-system   kube-proxy-5686g            1/1       Running             0          1h
kube-system   kube-proxy-hwd29            1/1       Running             0          1h
kube-system   kube-proxy-m5854            1/1       Running             0          1h

When I tried getting the logs

akhil@ubuntu:~$ journalctl -u kubelet

I got

-- No entries --

Also running kubectl logs aws-node-bfvf2 -n kube-system

=====Starting installing AWS-CNI =========
=====Starting amazon-k8s-agent ===========
ERROR: logging before flag.Parse: W0626 19:18:29.099664      14 client_config.go:533] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.

bilby91 commented Jul 4, 2018

@akhilyendluri I'm in the same situation as you: trying to run a simple demo app using EKS, and neither kube-dns nor my demo service can start due to the same issue.

I tried recreating kube-dns, but the same issue happens.

Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-7cc87d595-dr6bw_kube-system" network: rpc error: code = Unavailable desc = grpc: the connection is unavailable

bilby91 commented Jul 4, 2018

I've managed to solve my issue. I had forgotten to use the same ControlPlane security group for both the cluster and the nodes. After recreating the cluster with the correct security group, everything started working as expected.
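
For anyone debugging the same EKS symptom, a hedged way to compare the security groups involved (the cluster name and instance ID below are placeholders):

aws eks describe-cluster --name my-cluster --query 'cluster.resourcesVpcConfig.securityGroupIds'
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query 'Reservations[].Instances[].SecurityGroups[].GroupId'
# Per the fix above, the control plane and the worker nodes must use security groups that allow
# traffic between them; otherwise the CNI cannot reach the API server and sandbox creation
# fails with "grpc: the connection is unavailable".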
