
none: coredns CrashLoopBackOff: dial tcp ip:443: connect: no route to host #4350

Closed
fabstao opened this issue May 24, 2019 · 27 comments
Labels
cause/firewall-or-proxy: When firewalls or proxies seem to be interfering
co/coredns: CoreDNS related issues
co/none-driver
kind/support: Categorizes issue or PR as a support question.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@fabstao

fabstao commented May 24, 2019

The exact command to reproduce the issue:
kubectl get pods --all-namespaces

The full output of the command that failed:

NAMESPACE     NAME                               READY   STATUS             RESTARTS   AGE
kube-system   coredns-fb8b8dccf-tn8vz            0/1     CrashLoopBackOff   4          110s
kube-system   coredns-fb8b8dccf-z28dc            0/1     CrashLoopBackOff   4          110s
kube-system   etcd-minikube                      1/1     Running            0          58s
kube-system   kube-addon-manager-minikube        1/1     Running            0          48s
kube-system   kube-apiserver-minikube            1/1     Running            0          52s
kube-system   kube-controller-manager-minikube   1/1     Running            0          41s
kube-system   kube-proxy-wb9bj                   1/1     Running            0          110s
kube-system   kube-scheduler-minikube            1/1     Running            0          41s
kube-system   storage-provisioner                1/1     Running            0          109s

The output of the minikube logs command:

[root@fabsnuc ~]# minikube logs
==> coredns <==
E0524 17:47:43.607851       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
E0524 17:47:43.607851       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:322: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-fb8b8dccf-tn8vz.unknownuser.log.ERROR.20190524-174743.1: no such file or directory

==> dmesg <==
dmesg: invalid option -- '='

Usage:
 dmesg [options]

Options:
 -C, --clear                 clear the kernel ring buffer
 -c, --read-clear            read and clear all messages
 -D, --console-off           disable printing messages to console
 -d, --show-delta            show time delta between printed messages
 -e, --reltime               show local time and time delta in readable format
 -E, --console-on            enable printing messages to console
 -F, --file <file>           use the file instead of the kernel log buffer
 -f, --facility <list>       restrict output to defined facilities
 -H, --human                 human readable output
 -k, --kernel                display kernel messages
 -L, --color                 colorize messages
 -l, --level <list>          restrict output to defined levels
 -n, --console-level <level> set level of messages printed to console
 -P, --nopager               do not pipe output into a pager
 -r, --raw                   print the raw message buffer
 -S, --syslog                force to use syslog(2) rather than /dev/kmsg
 -s, --buffer-size <size>    buffer size to query the kernel ring buffer
 -T, --ctime                 show human readable timestamp (could be 
                               inaccurate if you have used SUSPEND/RESUME)
 -t, --notime                don't print messages timestamp
 -u, --userspace             display userspace messages
 -w, --follow                wait for new messages
 -x, --decode                decode facility and level to readable string

 -h, --help     display this help and exit
 -V, --version  output version information and exit

Supported log facilities:
    kern - kernel messages
    user - random user-level messages
    mail - mail system
  daemon - system daemons
    auth - security/authorization messages
  syslog - messages generated internally by syslogd
     lpr - line printer subsystem
    news - network news subsystem

Supported log levels (priorities):
   emerg - system is unusable
   alert - action must be taken immediately
    crit - critical conditions
     err - error conditions
    warn - warning conditions
  notice - normal but significant condition
    info - informational
   debug - debug-level messages


For more details see dmesg(q).

==> kernel <==
 12:48:49 up 12:26,  3 users,  load average: 0.21, 0.23, 0.20
Linux fabsnuc.intel.com 3.10.0-957.12.2.el7.x86_64 #1 SMP Tue May 14 21:24:32 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

==> kube-addon-manager <==
WRN: == Error getting default service account, retry in 0.5 second ==
Error from server (NotFound): serviceaccounts "default" not found
WRN: == Error getting default service account, retry in 0.5 second ==
Error from server (NotFound): serviceaccounts "default" not found
WRN: == Error getting default service account, retry in 0.5 second ==
Error from server (NotFound): serviceaccounts "default" not found
WRN: == Error getting default service account, retry in 0.5 second ==
Error from server (NotFound): serviceaccounts "default" not found
WRN: == Error getting default service account, retry in 0.5 second ==
INFO: == Default service account in the kube-system namespace has token default-token-qb9ck ==
find: '/etc/kubernetes/admission-controls': No such file or directory
INFO: == Entering periodical apply loop at 2019-05-24T17:44:29+00:00 ==
INFO: Leader is fabsnuc.intel.com
clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
storageclass.storage.k8s.io/standard created
INFO: == Kubernetes addon ensure completed at 2019-05-24T17:44:29+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner created
pod/storage-provisioner created
INFO: == Kubernetes addon reconcile completed at 2019-05-24T17:44:32+00:00 ==
INFO: Leader is fabsnuc.intel.com
INFO: == Kubernetes addon ensure completed at 2019-05-24T17:45:29+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-24T17:45:30+00:00 ==
INFO: Leader is fabsnuc.intel.com
INFO: == Kubernetes addon ensure completed at 2019-05-24T17:46:29+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-24T17:46:31+00:00 ==
INFO: Leader is fabsnuc.intel.com
INFO: == Kubernetes addon ensure completed at 2019-05-24T17:47:29+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-24T17:47:30+00:00 ==
INFO: Leader is fabsnuc.intel.com
INFO: == Kubernetes addon ensure completed at 2019-05-24T17:48:29+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-24T17:48:31+00:00 ==

==> kube-apiserver <==
I0524 17:44:20.171744       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I0524 17:44:20.213052       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I0524 17:44:20.252888       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I0524 17:44:20.293107       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I0524 17:44:20.333042       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I0524 17:44:20.373102       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I0524 17:44:20.413045       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I0524 17:44:20.453050       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I0524 17:44:20.493174       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I0524 17:44:20.532906       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I0524 17:44:20.572937       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I0524 17:44:20.613066       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I0524 17:44:20.652852       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I0524 17:44:20.693204       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I0524 17:44:20.731879       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I0524 17:44:20.773076       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I0524 17:44:20.816392       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I0524 17:44:20.852923       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I0524 17:44:21.145954       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I0524 17:44:21.149326       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I0524 17:44:21.152939       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I0524 17:44:21.156458       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I0524 17:44:21.159898       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I0524 17:44:21.163538       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I0524 17:44:21.167086       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I0524 17:44:21.172823       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I0524 17:44:21.212745       1 storage_rbac.go:223] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I0524 17:44:21.251844       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0524 17:44:21.253898       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I0524 17:44:21.292135       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0524 17:44:21.331692       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0524 17:44:21.372887       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0524 17:44:21.388219       1 controller.go:606] quota admission added evaluator for: endpoints
I0524 17:44:21.412944       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0524 17:44:21.453044       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0524 17:44:21.495019       1 storage_rbac.go:254] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I0524 17:44:21.531542       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0524 17:44:21.533395       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I0524 17:44:21.573147       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I0524 17:44:21.613109       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I0524 17:44:21.653065       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I0524 17:44:21.692961       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I0524 17:44:21.732967       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I0524 17:44:21.773047       1 storage_rbac.go:284] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
W0524 17:44:21.791574       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.0.159]
I0524 17:44:22.361003       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0524 17:44:22.965355       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0524 17:44:23.308886       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0524 17:44:29.163799       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0524 17:44:29.282538       1 controller.go:606] quota admission added evaluator for: replicasets.apps

==> kube-proxy <==
W0524 17:44:30.645444       1 proxier.go:498] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0524 17:44:30.651474       1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
I0524 17:44:30.656952       1 server_others.go:146] Using iptables Proxier.
W0524 17:44:30.657029       1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0524 17:44:30.657135       1 server.go:562] Version: v1.14.2
I0524 17:44:30.669112       1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0524 17:44:30.669356       1 config.go:202] Starting service config controller
I0524 17:44:30.669383       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0524 17:44:30.670178       1 config.go:102] Starting endpoints config controller
I0524 17:44:30.670192       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0524 17:44:30.769721       1 controller_utils.go:1034] Caches are synced for service config controller
I0524 17:44:30.770421       1 controller_utils.go:1034] Caches are synced for endpoints config controller

==> kube-scheduler <==
I0524 17:44:15.952574       1 serving.go:319] Generated self-signed cert in-memory
W0524 17:44:16.280112       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0524 17:44:16.280124       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0524 17:44:16.280153       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0524 17:44:16.281836       1 server.go:142] Version: v1.14.2
I0524 17:44:16.281876       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0524 17:44:16.283073       1 authorization.go:47] Authorization is disabled
W0524 17:44:16.283082       1 authentication.go:55] Authentication is disabled
I0524 17:44:16.283093       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0524 17:44:16.283369       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0524 17:44:18.410541       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0524 17:44:18.410618       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0524 17:44:18.411874       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0524 17:44:18.414829       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0524 17:44:18.414876       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0524 17:44:18.426899       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0524 17:44:18.426920       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0524 17:44:18.427736       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0524 17:44:18.428376       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0524 17:44:18.428377       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0524 17:44:19.411860       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0524 17:44:19.412766       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0524 17:44:19.414768       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0524 17:44:19.415861       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0524 17:44:19.423611       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0524 17:44:19.428104       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0524 17:44:19.429304       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0524 17:44:19.430785       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0524 17:44:19.432002       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0524 17:44:19.432981       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
I0524 17:44:21.284704       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0524 17:44:21.384829       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0524 17:44:21.384968       1 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-scheduler...
I0524 17:44:21.389935       1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Fri 2019-05-24 00:22:03 CDT, end at Fri 2019-05-24 12:48:49 CDT. --
May 24 12:44:33 fabsnuc.intel.com kubelet[26430]: E0524 12:44:33.120690   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:44:34 fabsnuc.intel.com kubelet[26430]: E0524 12:44:34.147984   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:44:34 fabsnuc.intel.com kubelet[26430]: E0524 12:44:34.159168   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:44:35 fabsnuc.intel.com kubelet[26430]: E0524 12:44:35.176560   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:44:36 fabsnuc.intel.com kubelet[26430]: E0524 12:44:36.125943   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:44:38 fabsnuc.intel.com kubelet[26430]: E0524 12:44:38.222372   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:44:52 fabsnuc.intel.com kubelet[26430]: E0524 12:44:52.325022   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:44:53 fabsnuc.intel.com kubelet[26430]: E0524 12:44:53.345496   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:44:56 fabsnuc.intel.com kubelet[26430]: E0524 12:44:56.126216   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:44:58 fabsnuc.intel.com kubelet[26430]: E0524 12:44:58.222897   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:45:09 fabsnuc.intel.com kubelet[26430]: E0524 12:45:09.417015   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:45:09 fabsnuc.intel.com kubelet[26430]: E0524 12:45:09.417037   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:45:22 fabsnuc.intel.com kubelet[26430]: E0524 12:45:22.754070   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 40s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:45:22 fabsnuc.intel.com kubelet[26430]: E0524 12:45:22.776061   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 40s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:45:23 fabsnuc.intel.com kubelet[26430]: E0524 12:45:23.792888   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 40s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:45:23 fabsnuc.intel.com kubelet[26430]: E0524 12:45:23.804715   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 40s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:45:26 fabsnuc.intel.com kubelet[26430]: E0524 12:45:26.126308   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 40s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:45:28 fabsnuc.intel.com kubelet[26430]: E0524 12:45:28.222872   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 40s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:45:38 fabsnuc.intel.com kubelet[26430]: E0524 12:45:38.417057   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 40s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:45:42 fabsnuc.intel.com kubelet[26430]: E0524 12:45:42.416826   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 40s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:45:51 fabsnuc.intel.com kubelet[26430]: E0524 12:45:51.416901   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 40s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:45:55 fabsnuc.intel.com kubelet[26430]: E0524 12:45:55.416863   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 40s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:46:06 fabsnuc.intel.com kubelet[26430]: E0524 12:46:06.134784   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:46:08 fabsnuc.intel.com kubelet[26430]: E0524 12:46:08.174227   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:46:09 fabsnuc.intel.com kubelet[26430]: E0524 12:46:09.187954   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:46:16 fabsnuc.intel.com kubelet[26430]: E0524 12:46:16.126184   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:46:22 fabsnuc.intel.com kubelet[26430]: E0524 12:46:22.417038   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:46:31 fabsnuc.intel.com kubelet[26430]: E0524 12:46:31.416842   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:46:34 fabsnuc.intel.com kubelet[26430]: E0524 12:46:34.418387   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:46:42 fabsnuc.intel.com kubelet[26430]: E0524 12:46:42.416809   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:46:46 fabsnuc.intel.com kubelet[26430]: E0524 12:46:46.416890   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:46:53 fabsnuc.intel.com kubelet[26430]: E0524 12:46:53.416860   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:47:01 fabsnuc.intel.com kubelet[26430]: E0524 12:47:01.417019   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:47:04 fabsnuc.intel.com kubelet[26430]: E0524 12:47:04.417086   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:47:13 fabsnuc.intel.com kubelet[26430]: E0524 12:47:13.416857   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:47:18 fabsnuc.intel.com kubelet[26430]: E0524 12:47:18.416919   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:47:27 fabsnuc.intel.com kubelet[26430]: E0524 12:47:27.416850   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:47:31 fabsnuc.intel.com kubelet[26430]: E0524 12:47:31.759366   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:47:36 fabsnuc.intel.com kubelet[26430]: E0524 12:47:36.126234   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:47:43 fabsnuc.intel.com kubelet[26430]: E0524 12:47:43.868686   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:47:48 fabsnuc.intel.com kubelet[26430]: E0524 12:47:48.222803   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:47:48 fabsnuc.intel.com kubelet[26430]: E0524 12:47:48.416945   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:47:59 fabsnuc.intel.com kubelet[26430]: E0524 12:47:59.416959   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:48:01 fabsnuc.intel.com kubelet[26430]: E0524 12:48:01.416976   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:48:11 fabsnuc.intel.com kubelet[26430]: E0524 12:48:11.417023   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:48:15 fabsnuc.intel.com kubelet[26430]: E0524 12:48:15.416908   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:48:26 fabsnuc.intel.com kubelet[26430]: E0524 12:48:26.416923   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:48:27 fabsnuc.intel.com kubelet[26430]: E0524 12:48:27.416851   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"
May 24 12:48:37 fabsnuc.intel.com kubelet[26430]: E0524 12:48:37.416896   26430 pod_workers.go:190] Error syncing pod 94a7de57-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-tn8vz_kube-system(94a7de57-7e4b-11e9-82df-54b20311c753)"
May 24 12:48:40 fabsnuc.intel.com kubelet[26430]: E0524 12:48:40.416996   26430 pod_workers.go:190] Error syncing pod 94a8990f-7e4b-11e9-82df-54b20311c753 ("coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=coredns pod=coredns-fb8b8dccf-z28dc_kube-system(94a8990f-7e4b-11e9-82df-54b20311c753)"

==> storage-provisioner <==

The operating system version:

[root@fabsnuc ~]# cat /etc/os-release 
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

[root@fabsnuc ~]# docker version
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77156
 Built:             Sat May  4 02:34:58 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May  4 02:02:43 2019
  OS/Arch:          linux/amd64
  Experimental:     false
[root@fabsnuc ~]# 
[root@fabsnuc ~]# getenforce 
Disabled
[root@fabsnuc ~]# 
@fabstao
Author

fabstao commented May 24, 2019

[root@fabsnuc ~]# kubectl logs deployment/coredns -n kube-system --previous
Found 2 pods, using pod/coredns-fb8b8dccf-5tpm6
E0524 18:06:42.120194       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
E0524 18:06:42.120194       1 reflector.go:134] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host
log: exiting because of error: log: cannot create log: open /tmp/coredns.coredns-fb8b8dccf-5tpm6.unknownuser.log.ERROR.20190524-180642.1: no such file or directory

@tstromberg
Contributor

This message isn't normal either:

dial tcp 10.96.0.1:443: connect: no route to host

Some other folks have similar coredns failures outside of minikube when the apiserver isn't available: kubernetes/kubernetes#75414

Why wouldn't the apiserver be available though? Here's one possible hint from kube-proxy:

W0524 17:44:30.645444 1 proxier.go:498] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules

This may be a red herring, but do you mind seeing what sudo modprobe ip_vs_rr outputs?
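For anyone following along, a quick way to check whether that module is available and loads cleanly (plain lsmod/modprobe, nothing minikube-specific; the module name comes from the kube-proxy warning above):

$ lsmod | grep ip_vs        # see whether any ipvs modules are already loaded
$ sudo modprobe ip_vs_rr    # attempt to load the round-robin scheduler module
$ echo $?                   # 0 means it loaded (or is built in) without error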

@tstromberg tstromberg added addon/efk Issues with EFK addon co/none-driver labels May 24, 2019
@tstromberg tstromberg changed the title coredns pods CrashLoopBackOff, already tried #3511 none: coredns pods CrashLoopBackOff: dial tcp 10.96.0.1:443: connect: no route to host May 24, 2019
@tstromberg tstromberg changed the title none: coredns pods CrashLoopBackOff: dial tcp 10.96.0.1:443: connect: no route to host none: coredns CrashLoopBackOff: dial tcp ip:443: connect: no route to host May 24, 2019
@tstromberg tstromberg added co/coredns CoreDNS related issues ev/CrashLoopBackOff Crash Loop Backoff events and removed addon/efk Issues with EFK addon labels May 24, 2019
@albinsuresh

albinsuresh commented May 27, 2019

@fabstao I was facing the exact same issue on my CentOS VM. I got it fixed by following the instructions in this comment: kubernetes/kubeadm#193 (comment) to flush the iptables
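For anyone who cannot follow the link, the commonly cited flush sequence (also quoted later in this thread) is roughly the sketch below. Note that it wipes every iptables rule on the host, so treat it as a workaround:

$ sudo systemctl stop kubelet
$ sudo systemctl stop docker
$ sudo iptables --flush
$ sudo iptables -t nat --flush
$ sudo systemctl start docker
$ sudo systemctl start kubelet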

@tstromberg
Contributor

Good to know. We should update the error message for this failure to mention flushing iptables then. Thanks!

@tstromberg tstromberg added cause/firewall-or-proxy When firewalls or proxies seem to be interfering needs-solution-message Issues where offering a solution for an error would be helpful priority/backlog Higher priority than priority/awaiting-more-evidence. labels May 29, 2019
@HattabbI4

@fabstao I was facing the exact same issue on my CentOS VM. I got it fixed by following the instructions in this comment: kubernetes/kubeadm#193 (comment) to flush the iptables

That didn't solve this problem for me...

@tstromberg tstromberg added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jul 18, 2019
@HanYL-coder

I got exactly the same issue too, and it was solved by @albinsuresh's reply.
Thanks a lot.

@slalomnut

Unfortunately, the fix in @albinsuresh's reply is a workaround. Does anyone know what the true fix is if you're running a customized local firewall? I'll do some digging and post again if I find it.

@medyagh
Member

medyagh commented Aug 20, 2019

@slalomnut could you please provide logs from the newest minikube version?

Both minikube logs and the start output, and also the output of kubectl get pods -o wide -n kube-system

and also

kubectl describe pod coredns -n kube-system

In the latest version we provide better logging.

@medyagh
Member

medyagh commented Aug 20, 2019

And I wonder: has anyone checked to see if this comment helps them (if the issue still exists with 1.3.1)? kubernetes/kubeadm#193 (comment)

@BrotherPatrix

I can confirm that it was a firewall issue on my side.
I was running Kubernetes 1.15.3 on my local machine (Ubuntu 18.04.3) with ufw enabled, and because of that CoreDNS was unable to communicate with 10.96.0.1:443. After I disabled ufw, the coredns pods came up and stayed running.
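If disabling ufw outright is not an option, a narrower alternative sketch (the docker0 interface name and the 8443 API server port are assumptions here; check your own setup):

$ sudo ufw allow in on docker0     # trust traffic on the container bridge (bridge name is an assumption)
$ sudo ufw allow out on docker0
$ sudo ufw allow 8443/tcp          # minikube's default apiserver port; confirm with kubectl cluster-info
$ sudo ufw reload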

@tstromberg tstromberg added kind/support Categorizes issue or PR as a support question. and removed ev/CrashLoopBackOff Crash Loop Backoff events needs-solution-message Issues where offering a solution for an error would be helpful priority/backlog Higher priority than priority/awaiting-more-evidence. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Sep 20, 2019
@tstromberg
Contributor

This seems solved, but I will leave it open for anyone else who runs into this.

@JokerDevops

Using the command systemctl stop firewalld,
I resolved the problem.
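Note that stop only lasts until the next reboot; if the intent is to keep the firewall off permanently (still a workaround, not a real fix), a minimal sketch:

$ sudo systemctl disable --now firewalld
$ sudo systemctl is-enabled firewalld    # should print "disabled"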

@harpratap

harpratap commented Oct 30, 2019

Upgraded to the latest v1.5.1 and am seeing the same issue, but now because of a different error: /etc/coredns/Corefile:4 - Error during parsing: Unknown directive 'ready'
Using the none driver on Ubuntu 18.04.3

This happens only on v1.4.0 and above; when I switch back to v1.3.1 and use --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf, it works fine.
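For completeness, the working v1.3.1 invocation described above looks roughly like this (flag copied from the comment; none driver as mentioned):

$ sudo minikube start --vm-driver=none \
    --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf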

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 29, 2020
@epicavic

epicavic commented Feb 11, 2020

In my case it was an issue with dashboard.
💣 http://127.0.0.1:37577/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ is not accessible: Temporary Error: unexpected response code: 503

If you have firewalld enabled, you can add the docker0 bridge interface to the trusted zone, which should allow docker containers to communicate with the host:

$ sudo minikube start --vm-driver=none
$ sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --get-active-zones
$ sudo firewall-cmd --list-all --zone=trusted
$ sudo chown -R $USER $HOME/.kube $HOME/.minikube
$ minikube dashboard &
$ minikube version 
minikube version: v1.7.2
commit: 50d543b5fcb0e1c0d7c27b1398a9a9790df09dfb

$ minikube status
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

$ sudo firewall-cmd --state
running

$ sudo firewall-cmd --get-active-zones
public
  interfaces: wlp1s0
trusted
  interfaces: docker0

$ sudo firewall-cmd --list-all --zone=trusted
trusted (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: docker0
  sources: 
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 
	
$ kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            coredns-6955765f44-7zjth                     1/1     Running   0          67m
kube-system            coredns-6955765f44-b9gbq                     1/1     Running   0          67m
kube-system            etcd-venga                                   1/1     Running   0          67m
kube-system            kube-apiserver-venga                         1/1     Running   0          67m
kube-system            kube-controller-manager-venga                1/1     Running   0          67m
kube-system            kube-proxy-7xv6h                             1/1     Running   0          67m
kube-system            kube-scheduler-venga                         1/1     Running   0          67m
kube-system            storage-provisioner                          1/1     Running   0          67m
kubernetes-dashboard   dashboard-metrics-scraper-7b64584c5c-nw82r   1/1     Running   0          65m
kubernetes-dashboard   kubernetes-dashboard-79d9cd965-zlbl2         1/1     Running   13         65m

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 12, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@zioalex

zioalex commented Sep 20, 2020

Just adding my experience. I had the same problem. In my case it was enough to enable the masquerading option on the default host outbound interface, and then the communication started to work.
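With firewalld, that usually means something like the following sketch (the public zone is an assumption; use whichever zone your outbound interface is actually in):

$ sudo firewall-cmd --get-active-zones                       # find the zone of the outbound interface
$ sudo firewall-cmd --permanent --zone=public --add-masquerade
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --zone=public --query-masquerade         # should print "yes"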

@chengfq5

Just adding my experience. I had the same problem. In my case it was enough to enable the masquerading option on the default host outbound interface, and then the communication started to work.

I had this problem in a prod environment after it had been running for some days: servers in pods could not access the outside network. Executing the commands below resolved it. I would like to know how this happened in the first place, and how to prevent it:
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -tnat --flush
systemctl start kubelet
systemctl start docker

@RajkumarShivage

I was facing the same problem:
Failed to list *v1.Namespace: Get "https://10.100.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.100.0.1:443: connect: no route to host

One of the worker nodes had firewalld running; stopping it resolved the issue.
Thanks all for your inputs!
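For anyone checking their own nodes, a minimal sketch of the same check and workaround (run on each node; stopping firewalld is still a workaround, as noted above):

$ sudo systemctl status firewalld    # find the node(s) where it is running
$ sudo systemctl stop firewalld      # temporary; it comes back after a reboot unless also disabled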

@sudhakarreddyambati

How can this be fixed permanently, without disabling firewalld or any other workaround?

@tacerus

tacerus commented Apr 11, 2021

How can flushing or disabling the firewall be an accepted solution? This is disastrous. Please provide details on which firewall ports need to be opened, and whether any Kubernetes-related interfaces (docker, flannel, ..) need to be assigned to specific zones, in order for CoreDNS to be able to connect to the API.

kubectl get svc kubernetes
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d7h
E0411 01:24:38.397089       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
E0411 01:24:44.477345       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
[INFO] plugin/ready: Still waiting on: "kubernetes"

@tacerus

tacerus commented Apr 11, 2021

/reopen

@k8s-ci-robot
Contributor

@tacerus: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@danpawlik

danpawlik commented Sep 29, 2023

Try:

Yes, after 6 years, that issue still appears from time to time.

@kennyhyun

kennyhyun commented Oct 9, 2023

I had this on Ubuntu aarch64 5.15.0-1045-oracle
with docker preinstalled

I ran microk8s inspect and applied the suggested commands, but had no luck.

The inspect output was:

WARNING:  IPtables FORWARD policy is DROP. Consider enabling traffic forwarding with: sudo iptables -P FORWARD ACCEPT
The change can be made persistent with: sudo apt-get install iptables-persistent
WARNING:  Docker is installed.
File "/etc/docker/daemon.json" does not exist.
You should create it and add the following lines:
{
    "insecure-registries" : ["localhost:32000"]
}
and then restart docker with: sudo systemctl restart docker

I also checked https://microk8s.io/docs/troubleshooting#common-issues

and tried similar steps like sudo apt install linux-modules-extra-5.15.0-1045-oracle, still no luck.

But after I did something similar to #4350 (comment),
it was fixed:

  • microk8s stop
  • systemctl stop docker
  • iptables --flush
  • iptables -tnat --flush
  • microk8s start
  • systemctl start docker

I think I had some issue with iptables, but I'm not sure about that.
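The iptables suspicion matches the microk8s warning quoted above; making that FORWARD policy change persistent, as the warning suggests, would look roughly like this on Debian/Ubuntu (a sketch):

$ sudo iptables -P FORWARD ACCEPT            # the change suggested by the microk8s warning
$ sudo apt-get install iptables-persistent   # offers to save the current rules during install
$ sudo netfilter-persistent save             # re-save later if the rules change again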
