
kubeadm failed to start kube-dns #38118

Closed
enm10k opened this issue Dec 5, 2016 · 4 comments
enm10k commented Dec 5, 2016

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

No

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):

kubeadm, kube-dns


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Kubernetes version (use kubectl version):

# kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:48:38Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:42:39Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release):
# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
UBUNTU_CODENAME=xenial
  • Kernel (e.g. uname -a):
# uname -a
Linux ip-10-1-24-223 4.4.0-51-generic #72-Ubuntu SMP Thu Nov 24 18:29:54 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools: kubeadm

  • Others:

What happened:

kubeadm failed to start kube-dns.

What you expected to happen:

kube-dns starts successfully.

How to reproduce it (as minimally and precisely as possible):

I followed the instructions at http://kubernetes.io/docs/getting-started-guides/kubeadm/:

# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# sudo sh -c "cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF"
# sudo apt-get update -y
# sudo apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni
# sudo kubeadm init

Anything else we need to know:

# kubectl get pod --all-namespaces=true
NAMESPACE     NAME                              READY     STATUS              RESTARTS   AGE
kube-system   dummy-2088944543-qjcd9            1/1       Running             0          32s
kube-system   kube-apiserver-ip-10-1-24-144     1/1       Running             0          37s
kube-system   kube-discovery-1150918428-d2nsx   1/1       Running             0          31s
kube-system   kube-dns-654381707-vf4te          0/3       ContainerCreating   0          19s
kube-system   kube-proxy-5vjdd                  1/1       Running             0          19s
# kubectl get events -n kube-system
LASTSEEN   FIRSTSEEN   COUNT     NAME                                     KIND         SUBOBJECT                                  TYPE      REASON              SOURCE                     MESSAGE
1m         1m          1         dummy-2088944543-qjcd9                   Pod                                                     Normal    Scheduled           {default-scheduler }       Successfully assigned dummy-2088944543-qjcd9 to ip-10-1-24-144
1m         1m          1         dummy-2088944543-qjcd9                   Pod          spec.containers{dummy}                     Normal    Pulled              {kubelet ip-10-1-24-144}   Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
1m         1m          1         dummy-2088944543-qjcd9                   Pod          spec.containers{dummy}                     Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id bee2cc05825c; Security:[seccomp=unconfined]
1m         1m          1         dummy-2088944543-qjcd9                   Pod          spec.containers{dummy}                     Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id bee2cc05825c
1m         1m          1         dummy-2088944543                         ReplicaSet                                              Normal    SuccessfulCreate    {replicaset-controller }   Created pod: dummy-2088944543-qjcd9
1m         1m          1         dummy                                    Deployment                                              Normal    ScalingReplicaSet   {deployment-controller }   Scaled up replica set dummy-2088944543 to 1
1m         1m          1         etcd-ip-10-1-24-144                      Pod          spec.containers{etcd}                      Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "gcr.io/google_containers/etcd-amd64:2.2.5"
1m         1m          1         etcd-ip-10-1-24-144                      Pod          spec.containers{etcd}                      Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "gcr.io/google_containers/etcd-amd64:2.2.5"
1m         1m          1         etcd-ip-10-1-24-144                      Pod          spec.containers{etcd}                      Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id 8f0fdf66dbb5; Security:[seccomp=unconfined]
1m         1m          1         etcd-ip-10-1-24-144                      Pod          spec.containers{etcd}                      Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id 8f0fdf66dbb5
1m         1m          1         kube-apiserver-ip-10-1-24-144            Pod          spec.containers{kube-apiserver}            Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "gcr.io/google_containers/kube-apiserver-amd64:v1.4.4"
1m         1m          1         kube-apiserver-ip-10-1-24-144            Pod          spec.containers{kube-apiserver}            Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "gcr.io/google_containers/kube-apiserver-amd64:v1.4.4"
1m         1m          1         kube-apiserver-ip-10-1-24-144            Pod          spec.containers{kube-apiserver}            Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id 0926c48e85b0; Security:[seccomp=unconfined]
1m         1m          1         kube-apiserver-ip-10-1-24-144            Pod          spec.containers{kube-apiserver}            Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id 0926c48e85b0
1m         1m          1         kube-controller-manager-ip-10-1-24-144   Pod          spec.containers{kube-controller-manager}   Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "gcr.io/google_containers/kube-controller-manager-amd64:v1.4.4"
1m         1m          1         kube-controller-manager-ip-10-1-24-144   Pod          spec.containers{kube-controller-manager}   Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "gcr.io/google_containers/kube-controller-manager-amd64:v1.4.4"
1m         1m          1         kube-controller-manager-ip-10-1-24-144   Pod          spec.containers{kube-controller-manager}   Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id 46485fd67302; Security:[seccomp=unconfined]
1m         1m          1         kube-controller-manager-ip-10-1-24-144   Pod          spec.containers{kube-controller-manager}   Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id 46485fd67302
1m         1m          1         kube-discovery-1150918428-d2nsx          Pod                                                     Normal    Scheduled           {default-scheduler }       Successfully assigned kube-discovery-1150918428-d2nsx to ip-10-1-24-144
1m         1m          1         kube-discovery-1150918428-d2nsx          Pod          spec.containers{kube-discovery}            Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "gcr.io/google_containers/kube-discovery-amd64:1.0"
1m         1m          1         kube-discovery-1150918428-d2nsx          Pod          spec.containers{kube-discovery}            Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "gcr.io/google_containers/kube-discovery-amd64:1.0"
1m         1m          1         kube-discovery-1150918428-d2nsx          Pod          spec.containers{kube-discovery}            Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id bc5677a97797; Security:[seccomp=unconfined]
1m         1m          1         kube-discovery-1150918428-d2nsx          Pod          spec.containers{kube-discovery}            Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id bc5677a97797
1m         1m          1         kube-discovery-1150918428                ReplicaSet                                              Normal    SuccessfulCreate    {replicaset-controller }   Created pod: kube-discovery-1150918428-d2nsx
1m         1m          1         kube-discovery                           Deployment                                              Normal    ScalingReplicaSet   {deployment-controller }   Scaled up replica set kube-discovery-1150918428 to 1
1m         1m          1         kube-dns-654381707-vf4te                 Pod                                                     Normal    Scheduled           {default-scheduler }       Successfully assigned kube-dns-654381707-vf4te to ip-10-1-24-144
1s         1m          68        kube-dns-654381707-vf4te                 Pod                                                     Warning   FailedSync          {kubelet ip-10-1-24-144}   Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-vf4te_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-vf4te_kube-system(c37672ee-bb00-11e6-9cb2-06d39f467f73)\" using network plugins \"cni\": cni config unintialized; Skipping pod"

1m        1m        1         kube-dns-654381707              ReplicaSet                                     Normal    SuccessfulCreate    {replicaset-controller }   Created pod: kube-dns-654381707-vf4te
1m        1m        1         kube-dns                        Deployment                                     Normal    ScalingReplicaSet   {deployment-controller }   Scaled up replica set kube-dns-654381707 to 1
1m        1m        1         kube-proxy-5vjdd                Pod          spec.containers{kube-proxy}       Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "gcr.io/google_containers/kube-proxy-amd64:v1.4.4"
1m        1m        1         kube-proxy-5vjdd                Pod          spec.containers{kube-proxy}       Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "gcr.io/google_containers/kube-proxy-amd64:v1.4.4"
1m        1m        1         kube-proxy-5vjdd                Pod          spec.containers{kube-proxy}       Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id c24e0db7ab80; Security:[seccomp=unconfined]
59s       59s       1         kube-proxy-5vjdd                Pod          spec.containers{kube-proxy}       Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id c24e0db7ab80
1m        1m        1         kube-proxy                      DaemonSet                                      Normal    SuccessfulCreate    {daemon-set }              Created pod: kube-proxy-5vjdd
1m        1m        1         kube-scheduler-ip-10-1-24-144   Pod          spec.containers{kube-scheduler}   Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "gcr.io/google_containers/kube-scheduler-amd64:v1.4.4"
1m        1m        1         kube-scheduler-ip-10-1-24-144   Pod          spec.containers{kube-scheduler}   Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "gcr.io/google_containers/kube-scheduler-amd64:v1.4.4"
1m        1m        1         kube-scheduler-ip-10-1-24-144   Pod          spec.containers{kube-scheduler}   Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id 8f2d941c1e27; Security:[seccomp=unconfined]
1m        1m        1         kube-scheduler-ip-10-1-24-144   Pod          spec.containers{kube-scheduler}   Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id 8f2d941c1e27

1s 1m 68 kube-dns-654381707-vf4te Pod Warning FailedSync {kubelet ip-10-1-24-144} Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-vf4te_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-vf4te_kube-system(c37672ee-bb00-11e6-9cb2-06d39f467f73)\" using network plugins \"cni\": cni config unintialized; Skipping pod" 🤔
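The "cni config uninitialized" message means the kubelet, running with the CNI network plugin, found no network configuration on the node. A quick way to confirm that this is the cause, assuming shell access to the master and the default kubelet paths:

```shell
# The kubelet (--network-plugin=cni) reads its network config from
# /etc/cni/net.d and loads plugin binaries from /opt/cni/bin.
# Until a network add-on drops a config file into /etc/cni/net.d,
# every non-hostNetwork pod (such as kube-dns) fails with
# "cni config uninitialized".
ls -l /etc/cni/net.d/   # expected to be empty right after `kubeadm init`
ls -l /opt/cni/bin/     # plugin binaries installed by the kubernetes-cni package
```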

I tried `kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml`, but kube-dns still didn't start.

# kubectl get pod --all-namespaces=true
NAMESPACE     NAME                                     READY     STATUS              RESTARTS   AGE
kube-system   dummy-2088944543-qjcd9                   1/1       Running             0          1h
kube-system   etcd-ip-10-1-24-144                      1/1       Running             0          1h
kube-system   kube-apiserver-ip-10-1-24-144            1/1       Running             0          1h
kube-system   kube-controller-manager-ip-10-1-24-144   1/1       Running             0          1h
kube-system   kube-discovery-1150918428-d2nsx          1/1       Running             0          1h
kube-system   kube-dns-654381707-vf4te                 0/3       ContainerCreating   0          1h
kube-system   kube-flannel-ds-oxl8w                    2/2       Running             0          4m
kube-system   kube-proxy-5vjdd                         1/1       Running             0          1h
kube-system   kube-scheduler-ip-10-1-24-144            1/1       Running             0          1h
# kubectl get events -n kube-system
LASTSEEN   FIRSTSEEN   COUNT     NAME                                     KIND         SUBOBJECT                                  TYPE      REASON              SOURCE                     MESSAGE
59m        59m         1         dummy-2088944543-qjcd9                   Pod                                                     Normal    Scheduled           {default-scheduler }       Successfully assigned dummy-2088944543-qjcd9 to ip-10-1-24-144
59m        59m         1         dummy-2088944543-qjcd9                   Pod          spec.containers{dummy}                     Normal    Pulled              {kubelet ip-10-1-24-144}   Container image "gcr.io/google_containers/pause-amd64:3.0" already present on machine
59m        59m         1         dummy-2088944543-qjcd9                   Pod          spec.containers{dummy}                     Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id bee2cc05825c; Security:[seccomp=unconfined]
59m        59m         1         dummy-2088944543-qjcd9                   Pod          spec.containers{dummy}                     Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id bee2cc05825c
59m        59m         1         dummy-2088944543                         ReplicaSet                                              Normal    SuccessfulCreate    {replicaset-controller }   Created pod: dummy-2088944543-qjcd9
59m        59m         1         dummy                                    Deployment                                              Normal    ScalingReplicaSet   {deployment-controller }   Scaled up replica set dummy-2088944543 to 1
1h         1h          1         etcd-ip-10-1-24-144                      Pod          spec.containers{etcd}                      Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "gcr.io/google_containers/etcd-amd64:2.2.5"
1h         1h          1         etcd-ip-10-1-24-144                      Pod          spec.containers{etcd}                      Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "gcr.io/google_containers/etcd-amd64:2.2.5"
1h         1h          1         etcd-ip-10-1-24-144                      Pod          spec.containers{etcd}                      Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id 8f0fdf66dbb5; Security:[seccomp=unconfined]
1h         1h          1         etcd-ip-10-1-24-144                      Pod          spec.containers{etcd}                      Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id 8f0fdf66dbb5
1h         1h          1         kube-apiserver-ip-10-1-24-144            Pod          spec.containers{kube-apiserver}            Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "gcr.io/google_containers/kube-apiserver-amd64:v1.4.4"
1h         1h          1         kube-apiserver-ip-10-1-24-144            Pod          spec.containers{kube-apiserver}            Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "gcr.io/google_containers/kube-apiserver-amd64:v1.4.4"
1h         1h          1         kube-apiserver-ip-10-1-24-144            Pod          spec.containers{kube-apiserver}            Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id 0926c48e85b0; Security:[seccomp=unconfined]
1h         1h          1         kube-apiserver-ip-10-1-24-144            Pod          spec.containers{kube-apiserver}            Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id 0926c48e85b0
1h         1h          1         kube-controller-manager-ip-10-1-24-144   Pod          spec.containers{kube-controller-manager}   Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "gcr.io/google_containers/kube-controller-manager-amd64:v1.4.4"
1h         1h          1         kube-controller-manager-ip-10-1-24-144   Pod          spec.containers{kube-controller-manager}   Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "gcr.io/google_containers/kube-controller-manager-amd64:v1.4.4"
1h         1h          1         kube-controller-manager-ip-10-1-24-144   Pod          spec.containers{kube-controller-manager}   Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id 46485fd67302; Security:[seccomp=unconfined]
1h         1h          1         kube-controller-manager-ip-10-1-24-144   Pod          spec.containers{kube-controller-manager}   Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id 46485fd67302
59m        59m         1         kube-discovery-1150918428-d2nsx          Pod                                                     Normal    Scheduled           {default-scheduler }       Successfully assigned kube-discovery-1150918428-d2nsx to ip-10-1-24-144
59m        59m         1         kube-discovery-1150918428-d2nsx          Pod          spec.containers{kube-discovery}            Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "gcr.io/google_containers/kube-discovery-amd64:1.0"
59m        59m         1         kube-discovery-1150918428-d2nsx          Pod          spec.containers{kube-discovery}            Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "gcr.io/google_containers/kube-discovery-amd64:1.0"
59m        59m         1         kube-discovery-1150918428-d2nsx          Pod          spec.containers{kube-discovery}            Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id bc5677a97797; Security:[seccomp=unconfined]
59m        59m         1         kube-discovery-1150918428-d2nsx          Pod          spec.containers{kube-discovery}            Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id bc5677a97797
59m        59m         1         kube-discovery-1150918428                ReplicaSet                                              Normal    SuccessfulCreate    {replicaset-controller }   Created pod: kube-discovery-1150918428-d2nsx
59m        59m         1         kube-discovery                           Deployment                                              Normal    ScalingReplicaSet   {deployment-controller }   Scaled up replica set kube-discovery-1150918428 to 1
59m        59m         1         kube-dns-654381707-vf4te                 Pod                                                     Normal    Scheduled           {default-scheduler }       Successfully assigned kube-dns-654381707-vf4te to ip-10-1-24-144
1m         59m         3489      kube-dns-654381707-vf4te                 Pod                                                     Warning   FailedSync          {kubelet ip-10-1-24-144}   Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-vf4te_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-vf4te_kube-system(c37672ee-bb00-11e6-9cb2-06d39f467f73)\" using network plugins \"cni\": cni config unintialized; Skipping pod"

1s        1m        70        kube-dns-654381707-vf4te   Pod                 Warning   FailedSync   {kubelet ip-10-1-24-144}   Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-vf4te_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-vf4te_kube-system(c37672ee-bb00-11e6-9cb2-06d39f467f73)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"

59m       59m       1         kube-dns-654381707              ReplicaSet                                     Normal    SuccessfulCreate    {replicaset-controller }   Created pod: kube-dns-654381707-vf4te
59m       59m       1         kube-dns                        Deployment                                     Normal    ScalingReplicaSet   {deployment-controller }   Scaled up replica set kube-dns-654381707 to 1
2m        2m        1         kube-flannel-ds-oxl8w           Pod          spec.containers{kube-flannel}     Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64"
1m        1m        1         kube-flannel-ds-oxl8w           Pod          spec.containers{kube-flannel}     Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64"
1m        1m        1         kube-flannel-ds-oxl8w           Pod          spec.containers{kube-flannel}     Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id 3f87aa05dd39; Security:[seccomp=unconfined]
1m        1m        1         kube-flannel-ds-oxl8w           Pod          spec.containers{kube-flannel}     Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id 3f87aa05dd39
1m        1m        1         kube-flannel-ds-oxl8w           Pod          spec.containers{install-cni}      Normal    Pulled              {kubelet ip-10-1-24-144}   Container image "quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64" already present on machine
1m        1m        1         kube-flannel-ds-oxl8w           Pod          spec.containers{install-cni}      Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id 95c33cbcc458; Security:[seccomp=unconfined]
1m        1m        1         kube-flannel-ds-oxl8w           Pod          spec.containers{install-cni}      Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id 95c33cbcc458
2m        2m        1         kube-flannel-ds                 DaemonSet                                      Normal    SuccessfulCreate    {daemon-set }              Created pod: kube-flannel-ds-oxl8w
59m       59m       1         kube-proxy-5vjdd                Pod          spec.containers{kube-proxy}       Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "gcr.io/google_containers/kube-proxy-amd64:v1.4.4"
59m       59m       1         kube-proxy-5vjdd                Pod          spec.containers{kube-proxy}       Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "gcr.io/google_containers/kube-proxy-amd64:v1.4.4"
59m       59m       1         kube-proxy-5vjdd                Pod          spec.containers{kube-proxy}       Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id c24e0db7ab80; Security:[seccomp=unconfined]
59m       59m       1         kube-proxy-5vjdd                Pod          spec.containers{kube-proxy}       Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id c24e0db7ab80
59m       59m       1         kube-proxy                      DaemonSet                                      Normal    SuccessfulCreate    {daemon-set }              Created pod: kube-proxy-5vjdd
1h        1h        1         kube-scheduler-ip-10-1-24-144   Pod          spec.containers{kube-scheduler}   Normal    Pulling             {kubelet ip-10-1-24-144}   pulling image "gcr.io/google_containers/kube-scheduler-amd64:v1.4.4"
1h        1h        1         kube-scheduler-ip-10-1-24-144   Pod          spec.containers{kube-scheduler}   Normal    Pulled              {kubelet ip-10-1-24-144}   Successfully pulled image "gcr.io/google_containers/kube-scheduler-amd64:v1.4.4"
1h        1h        1         kube-scheduler-ip-10-1-24-144   Pod          spec.containers{kube-scheduler}   Normal    Created             {kubelet ip-10-1-24-144}   Created container with docker id 8f2d941c1e27; Security:[seccomp=unconfined]
1h        1h        1         kube-scheduler-ip-10-1-24-144   Pod          spec.containers{kube-scheduler}   Normal    Started             {kubelet ip-10-1-24-144}   Started container with docker id 8f2d941c1e27
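The error changed from "cni config uninitialized" to a missing `/run/flannel/subnet.env`: flannel writes that file on each node once it has obtained a subnet lease, and here it never did. One common cause with the coreos manifest, whose ConfigMap assumes the 10.244.0.0/16 pod network, is initializing the cluster without a matching pod CIDR. A possible fix, assuming you can tear the cluster down and re-create it:

```shell
# kube-flannel.yml expects the cluster pod CIDR to match its ConfigMap
# default (10.244.0.0/16), so pass it to kubeadm at init time.
sudo kubeadm reset
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Once the flannel pod is Running, it should write the subnet file:
cat /run/flannel/subnet.env
```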
du2016 (Contributor) commented Dec 6, 2016

You need to add the CNI network first.
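In practice that means applying a pod network add-on right after `kubeadm init`, before kube-dns can leave ContainerCreating. A minimal sketch using Weave Net, the add-on referenced by the kubeadm getting-started guide of that era (flannel or another CNI add-on works the same way):

```shell
# Install a pod network add-on; this drops a CNI config into
# /etc/cni/net.d on each node, which is what the kubelet was missing.
kubectl apply -f https://git.io/weave-kube

# kube-dns should move from ContainerCreating to Running once the
# add-on's pods are up:
kubectl get pods --all-namespaces -w
```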

Zjianglin commented Dec 6, 2016

I am hitting a similar problem. I have already added the CNI network, but the kube-dns container in pod `kube-dns-654381707-k0bpz` is stuck in CrashLoopBackOff.

root@kube-master:~# kc get pods --all-namespaces
NAMESPACE     NAME                                  READY     STATUS             RESTARTS   AGE
kube-system   dummy-2088944543-vfkb8                1/1       Running            0          17m
kube-system   etcd-kube-master                      1/1       Running            0          16m
kube-system   kube-apiserver-kube-master            1/1       Running            0          16m
kube-system   kube-controller-manager-kube-master   1/1       Running            0          16m
kube-system   kube-discovery-1150918428-f7jhy       1/1       Running            0          17m
kube-system   kube-dns-654381707-k0bpz              2/3       CrashLoopBackOff   7          17m
kube-system   kube-proxy-s3450                      1/1       Running            0          17m
kube-system   kube-scheduler-kube-master            1/1       Running            0          16m
kube-system   weave-net-jeidu                       2/2       Running            0          15m

root@kube-master:~$ kc describe pod kube-dns-654381707-k0bpz -n kube-system
Name:		kube-dns-654381707-k0bpz
Namespace:	kube-system
Node:		kube-master/10.239.47.100
Start Time:	Tue, 06 Dec 2016 13:15:25 +0800
Labels:		component=kube-dns
		k8s-app=kube-dns
		kubernetes.io/cluster-service=true
		name=kube-dns
		pod-template-hash=654381707
		tier=node
Status:		Running
IP:		10.32.0.1
Controllers:	ReplicaSet/kube-dns-654381707
Containers:
  kube-dns:
    Container ID:	docker://349e3676e52bb01d23a19aecc35fe35b1e723a0b2c5fe6186695ccfdb4a15668
    Image:		gcr.io/google_containers/kubedns-amd64:1.7
    Image ID:		docker://sha256:6cd03642b177cac7794660604e1be852642787205a71e86f187bddb944a52eec
    Ports:		10053/UDP, 10053/TCP
    Args:
      --domain=cluster.local
      --dns-port=10053
    Limits:
      cpu:	100m
      memory:	170Mi
    Requests:
      cpu:		100m
      memory:		170Mi
    State:		Running
      Started:		Tue, 06 Dec 2016 13:33:29 +0800
    Last State:		Terminated
      Reason:		Error
      Exit Code:	255
      Started:		Tue, 06 Dec 2016 13:27:06 +0800
      Finished:		Tue, 06 Dec 2016 13:28:16 +0800
    Ready:		False
    Restart Count:	8
    Liveness:		http-get http://:8080/healthz delay=60s timeout=5s period=10s #success=1 #failure=1
    Readiness:		http-get http://:8081/readiness delay=30s timeout=5s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vzbwn (ro)
    Environment Variables:	<none>
  dnsmasq:
    Container ID:	docker://75f1d9c30269b177e5b29743880a277354503149ea67934af0d6bc4ce632957b
    Image:		gcr.io/google_containers/kube-dnsmasq-amd64:1.3
    Image ID:		docker://sha256:2a7c0456186f149bef32522e7bdd99d8f700e96d356b9bbdce0275f0b5ecee3f
    Ports:		53/UDP, 53/TCP
    Args:
      --cache-size=1000
      --no-resolv
      --server=127.0.0.1#10053
    Limits:
      cpu:	100m
      memory:	170Mi
    Requests:
      cpu:		100m
      memory:		170Mi
    State:		Running
      Started:		Tue, 06 Dec 2016 13:17:31 +0800
    Ready:		True
    Restart Count:	0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vzbwn (ro)
    Environment Variables:	<none>
  healthz:
    Container ID:	docker://cc0605f476ee3f39213f9a0eb048ac5612bcda597cb7502482effc498d21e470
    Image:		gcr.io/google_containers/exechealthz-amd64:1.1
    Image ID:		docker://sha256:0e2effc928a57b292bd8d0b4377f4d6517695ed20776310403fa2ad9ccff8896
    Port:		8080/TCP
    Args:
      -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:53 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
      -port=8080
      -quiet
    Limits:
      cpu:	10m
      memory:	50Mi
    Requests:
      cpu:		10m
      memory:		50Mi
    State:		Running
      Started:		Tue, 06 Dec 2016 13:17:32 +0800
    Ready:		True
    Restart Count:	0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vzbwn (ro)
    Environment Variables:	<none>
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  default-token-vzbwn:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-vzbwn
QoS Class:	Guaranteed
Tolerations:	dedicated=master:NoSchedule
Events:
  FirstSeen	LastSeen	Count	From			SubobjectPath	Type		Reason		Message
  ---------	--------	-----	----			-------------	--------	------		-------
  18m		18m		1	{default-scheduler }			Normal		Scheduled	Successfully assigned kube-dns-654381707-k0bpz to kube-master
  18m		18m		1	{kubelet kube-master}			Warning		FailedSync	Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-k0bpz_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/4f2d67d2b4dbc32eb1724c082d8f5f239fcea333e0532a65c43b592aedd10a6d: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  18m	18m	1	{kubelet kube-master}		Warning	FailedSync	Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-k0bpz_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/028ca940762cb20cd76084953f523d38f6709652fc4ad93b4865d343cba2c07f: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  18m	18m	1	{kubelet kube-master}		Warning	FailedSync	Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-k0bpz_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/6492310bba12f75aefe22d55640b7f2eb9f85e4b85ec4615f29afb43d0e9c967: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  18m	18m	1	{kubelet kube-master}		Warning	FailedSync	Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-k0bpz_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/432d71973d26007fdb6a88e6d4eee30ce0f78ce7afb1c9345e084c0f65e3df0f: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  18m	18m	1	{kubelet kube-master}		Warning	FailedSync	Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-k0bpz_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/f9a7fd6f700a79179ae8e0d736a1e321ddb4a73fec6a52513c1301420042eb77: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  18m	18m	1	{kubelet kube-master}		Warning	FailedSync	Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-k0bpz_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/7e3b05626679aa3837bbe3d2d5719f4207248d94b783e32501774fc54e31c32e: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  18m	18m	1	{kubelet kube-master}		Warning	FailedSync	Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-k0bpz_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/bfa5dc18f3e80f297fa38edf2de5a130a5b410fb783e157ae2da5c896b8dd043: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  18m	18m	1	{kubelet kube-master}		Warning	FailedSync	Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-k0bpz_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/d91f6da401e97e3afe5ffa3c6a8318248c19031464c3f72eb21ea79e3eafd35b: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  17m	17m	1	{kubelet kube-master}		Warning	FailedSync	Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-k0bpz_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)\" using network plugins \"cni\": unable to allocate IP address: Post http://127.0.0.1:6784/ip/3538c9ab8f76eb1324435730b1ba63043a3db14ad3f006596f9ecb209a903269: dial tcp 127.0.0.1:6784: getsockopt: connection refused; Skipping pod"

  16m	16m	1	{kubelet kube-master}	spec.containers{dnsmasq}	Normal	Pulled		Container image "gcr.io/google_containers/kube-dnsmasq-amd64:1.3" already present on machine
  16m	16m	1	{kubelet kube-master}	spec.containers{dnsmasq}	Normal	Created		Created container with docker id 75f1d9c30269; Security:[seccomp=unconfined]
  16m	16m	1	{kubelet kube-master}	spec.containers{dnsmasq}	Normal	Started		Started container with docker id 75f1d9c30269
  16m	16m	1	{kubelet kube-master}	spec.containers{healthz}	Normal	Pulled		Container image "gcr.io/google_containers/exechealthz-amd64:1.1" already present on machine
  16m	16m	1	{kubelet kube-master}	spec.containers{healthz}	Normal	Started		Started container with docker id cc0605f476ee
  16m	16m	1	{kubelet kube-master}	spec.containers{healthz}	Normal	Created		Created container with docker id cc0605f476ee; Security:[seccomp=unconfined]
  16m	16m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Created		Created container with docker id f28e3902bd81; Security:[seccomp=unconfined]
  16m	16m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Started		Started container with docker id f28e3902bd81
  15m	15m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Started		Started container with docker id 1c6b850e44f6
  15m	15m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Killing		Killing container with docker id f28e3902bd81: pod "kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)" container "kube-dns" is unhealthy, it will be killed and re-created.
  15m	15m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Created		Created container with docker id 1c6b850e44f6; Security:[seccomp=unconfined]
  14m	14m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Killing		Killing container with docker id 1c6b850e44f6: pod "kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)" container "kube-dns" is unhealthy, it will be killed and re-created.
  14m	14m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Created		Created container with docker id cf5e1b9c30f7; Security:[seccomp=unconfined]
  14m	14m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Started		Started container with docker id cf5e1b9c30f7
  12m	12m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Killing		Killing container with docker id cf5e1b9c30f7: pod "kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)" container "kube-dns" is unhealthy, it will be killed and re-created.
  12m	12m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Created		Created container with docker id 00bfa3526e09; Security:[seccomp=unconfined]
  12m	12m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Started		Started container with docker id 00bfa3526e09
  11m	11m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Killing		Killing container with docker id 00bfa3526e09: pod "kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)" container "kube-dns" is unhealthy, it will be killed and re-created.
  11m	11m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Created		Created container with docker id ff4d385917f1; Security:[seccomp=unconfined]
  11m	11m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Started		Started container with docker id ff4d385917f1
  10m	10m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Killing		Killing container with docker id ff4d385917f1: pod "kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)" container "kube-dns" is unhealthy, it will be killed and re-created.
  10m	10m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Created		Created container with docker id f6f4f28a0725; Security:[seccomp=unconfined]
  10m	10m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Started		Started container with docker id f6f4f28a0725
  9m	9m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Killing		Killing container with docker id f6f4f28a0725: pod "kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)" container "kube-dns" is unhealthy, it will be killed and re-created.
  9m	8m	6	{kubelet kube-master}					Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "kube-dns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-dns pod=kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)"

  7m	7m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Created		Created container with docker id f8d12a0a8de8; Security:[seccomp=unconfined]
  7m	7m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Started		Started container with docker id f8d12a0a8de8
  6m	6m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Killing		Killing container with docker id f8d12a0a8de8: pod "kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)" container "kube-dns" is unhealthy, it will be killed and re-created.
  15m	5m	8	{kubelet kube-master}	spec.containers{kube-dns}	Warning	Unhealthy	Liveness probe failed: Get http://10.32.0.1:8080/healthz: dial tcp 10.32.0.1:8080: getsockopt: connection refused
  15m	5m	32	{kubelet kube-master}	spec.containers{kube-dns}	Warning	Unhealthy	Readiness probe failed: Get http://10.32.0.1:8081/readiness: dial tcp 10.32.0.1:8081: getsockopt: connection refused
  17m	5m	39	{kubelet kube-master}					Warning	FailedSync	(events with common reason combined)
  5m	5m	1	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Killing		Killing container with docker id 1aa35429513c: pod "kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)" container "kube-dns" is unhealthy, it will be killed and re-created.
  5m	36s	23	{kubelet kube-master}					Warning	FailedSync	Error syncing pod, skipping: failed to "StartContainer" for "kube-dns" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-dns pod=kube-dns-654381707-k0bpz_kube-system(feae328d-bb72-11e6-b14f-0019bb5451a0)"

  9m	36s	32	{kubelet kube-master}	spec.containers{kube-dns}	Warning	BackOff	Back-off restarting failed docker container
  16m	22s	9	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Pulled	Container image "gcr.io/google_containers/kubedns-amd64:1.7" already present on machine
  6m	21s	2	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Created	(events with common reason combined)
  6m	21s	2	{kubelet kube-master}	spec.containers{kube-dns}	Normal	Started	(events with common reason combined)
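The repeated SetupNetworkError events above all point at the CNI IPAM endpoint on 127.0.0.1:6784 refusing connections; that port is the Weave Net HTTP API, so the network add-on itself was likely down or never deployed. A minimal diagnostic sketch (assumes Weave was the intended add-on; pod names and labels vary per add-on):

```shell
# List all kube-system pods and look for the network add-on pod
# (weave-net / kube-flannel-ds) and whether it is Running:
kubectl get pods -n kube-system -o wide

# 127.0.0.1:6784 is the Weave Net HTTP API on each node; if it refuses
# connections, the weave pod is down or was never applied:
curl -sf http://127.0.0.1:6784/status || echo "weave API not reachable"
```

Until the network add-on is healthy, kube-dns cannot get a pod IP and will keep crash-looping exactly as in the events above.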

@enm10k (Author) commented Dec 6, 2016

@du2016
Thank you for telling me. I was able to solve my problem with the following commands:

$ sudo kubeadm reset
$ sudo systemctl restart kubelet.service
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
$ kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                     READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-puboy                   1/1       Running   0          6m
kube-system   etcd-ip-10-1-24-144                      1/1       Running   0          6m
kube-system   kube-apiserver-ip-10-1-24-144            1/1       Running   0          6m
kube-system   kube-controller-manager-ip-10-1-24-144   1/1       Running   0          6m
kube-system   kube-discovery-1150918428-gyuzi          1/1       Running   0          6m
kube-system   kube-dns-654381707-mblpk                 3/3       Running   0          6m
kube-system   kube-flannel-ds-hsg6v                    2/2       Running   0          3m
kube-system   kube-proxy-zi3cc                         1/1       Running   0          6m
kube-system   kube-scheduler-ip-10-1-24-144            1/1       Running   0          6m

I thought I had already tried the --pod-network-cidr=10.244.0.0/16 option, but I may have misunderstood.

@Zjianglin I will close this issue. Please open a new issue yourself if needed.

@enm10k enm10k closed this as completed Dec 6, 2016
@pgnaleen commented

In my case, I hadn't set up the flannel pod network correctly.
https://github.com/coreos/flannel

I ran this:
kubeadm init --pod-network-cidr=10.244.0.0/16

Then, after setup was done, I ran the following command:
kubectl apply -f kube-flannel.yml
https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml

This is wrong. If you ran only that command, you also need to run:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml

Or, instead of the two commands above, you can apply both manifests at once:

kubectl apply -f kube-flannel-rbac.yml -f kube-flannel.yml
https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel-rbac.yml
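A sketch of the combined sequence plus a verification step (the daemonset name `kube-flannel-ds` and the `app=flannel` label are taken from the flannel manifest linked above and may change between flannel releases):

```shell
# Apply the RBAC rules and the flannel daemonset together, straight from
# the upstream manifests:
kubectl apply \
  -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml \
  -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# Confirm a flannel pod is Running on every node before expecting
# kube-dns to come up:
kubectl -n kube-system get ds kube-flannel-ds
kubectl -n kube-system get pods -l app=flannel
```

Once the flannel pods report Running, kube-dns should be able to obtain a pod IP and pass its readiness probe.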
