
Fail to get rootfs information unable to find data for container / #58217

Open
duffqiu opened this Issue Jan 12, 2018 · 27 comments

Comments

@duffqiu

duffqiu commented Jan 12, 2018

Hi all,

I use systemd to start the kubelet, but the log shows this error:

Jan 12 15:34:42 centos3 kubelet[6623]: E0112 15:34:42.401593    6623 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /

but I can still create pods successfully.
Can anyone help?

The version is 1.9.

/sig Apps

@shenshouer

shenshouer commented Jan 15, 2018

error log from kubelet v1.9.1:

Jan 15 12:36:53 l23-27-101 kubelet[7335]: E0115 12:36:53.885011    7335 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Jan 15 12:36:54 l23-27-101 kubelet[7335]: E0115 12:36:54.885214    7335 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
@devenfan

devenfan commented Jan 28, 2018

the same error from kubelet 1.9.1:

root@ubuntu-64:/opt/bin# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2018-01-28 10:01:55 CST; 17min ago
     Docs: https://github.com/kubernetes
 Main PID: 20940 (kubelet)
    Tasks: 21
   Memory: 71.0M
      CPU: 58.328s
   CGroup: /system.slice/kubelet.service
           └─20940 /opt/bin/kubelet --logtostderr=false --v=2 --log-dir=/var/log/kubernetes --address=0.0.0.0 --hostname-override=ubuntu-64 --allow-privileged=true --cgroup-driver=systemd --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cl

Jan 28 10:01:55 ubuntu-64 systemd[1]: Started Kubernetes Kubelet Server.
Jan 28 10:02:00 ubuntu-64 kubelet[20940]: E0128 10:02:00.996798   20940 kubelet.go:1275] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jan 28 10:02:01 ubuntu-64 kubelet[20940]: E0128 10:02:01.060356   20940 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Jan 28 10:02:02 ubuntu-64 kubelet[20940]: E0128 10:02:02.060852   20940 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Jan 28 10:02:03 ubuntu-64 kubelet[20940]: E0128 10:02:03.061452   20940 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /

@etabest

etabest commented Jan 30, 2018

I also hit this error.
Environment:
OS -- Ubuntu Server 16.04
kubelet -- v1.9.2
Error info:
Jan 30 04:04:44 node-1 kubelet[9879]: E0130 04:04:44.861884 9879 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Jan 30 04:04:45 node-1 kubelet[9879]: E0130 04:04:45.864870 9879 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /

@voor

voor commented Feb 3, 2018

Same issue on Fedora 27:

Linux HOSTNAME_REMOVED 4.14.14-300.fc27.x86_64 #1 SMP Fri Jan 19 13:19:54 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"archive", BuildDate:"2018-01-15T15:56:33Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"archive", BuildDate:"2018-01-15T15:56:33Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
[root@x2 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2018-02-03 09:51:57 EST; 4s ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 32508 (kubelet)
    Tasks: 14 (limit: 4915)
   Memory: 34.6M
      CPU: 993ms
   CGroup: /system.slice/kubelet.service
           └─32508 /usr/bin/kubelet --logtostderr=true --v=0 --address=127.0.0.1 --hostname-override=127.0.0.1 --allow-privileged=false --cgroup-driver=systemd --kubeconfig=/var/lib/kubelet/kubeconfig

Feb 03 09:51:57 x2 kubelet[32508]: I0203 09:51:57.755379   32508 kubelet.go:1778] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Feb 03 09:51:57 x2 kubelet[32508]: I0203 09:51:57.756393   32508 volume_manager.go:247] Starting Kubelet Volume Manager
Feb 03 09:51:57 x2 kubelet[32508]: E0203 09:51:57.757568   32508 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Feb 03 09:51:57 x2 kubelet[32508]: I0203 09:51:57.856510   32508 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Feb 03 09:51:57 x2 kubelet[32508]: I0203 09:51:57.859395   32508 kubelet_node_status.go:82] Attempting to register node 127.0.0.1
Feb 03 09:51:57 x2 kubelet[32508]: I0203 09:51:57.866008   32508 kubelet_node_status.go:127] Node 127.0.0.1 was previously registered
Feb 03 09:51:57 x2 kubelet[32508]: I0203 09:51:57.866030   32508 kubelet_node_status.go:85] Successfully registered node 127.0.0.1
Feb 03 09:51:57 x2 kubelet[32508]: I0203 09:51:57.869931   32508 kubelet_node_status.go:792] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2018-02-03 09:51:57.869916367 -0500 EST m=+0.529801371 LastTransitionTime:2018-02-03 09:51:57.869916367 -050
Feb 03 09:51:58 x2 kubelet[32508]: E0203 09:51:58.757957   32508 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Feb 03 09:51:59 x2 kubelet[32508]: E0203 09:51:59.758345   32508 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
@voor

voor commented Feb 3, 2018

Some additional information:

[root@x2 ~]# curl http://127.0.0.1:4194/validate/
cAdvisor version: 

OS version: Fedora 27 (Workstation Edition)

Kernel version: [Supported and recommended]
	Kernel version is 4.14.14-300.fc27.x86_64. Versions >= 2.6 are supported. 3.0+ are recommended.


Cgroup setup: [Supported and recommended]
	Available cgroups: map[blkio:1 memory:1 devices:1 freezer:1 net_cls:1 net_prio:1 hugetlb:1 cpuset:1 cpu:1 cpuacct:1 perf_event:1 pids:1]
	Following cgroups are required: [cpu cpuacct]
	Following other cgroups are recommended: [memory blkio cpuset devices freezer]
	Hierarchical memory accounting enabled. Reported memory usage includes memory used by child containers.


Cgroup mount setup: [Supported and recommended]
	Cgroups are mounted at /sys/fs/cgroup.
	Cgroup mount directories: blkio cpu cpu,cpuacct cpuacct cpuset devices freezer hugetlb memory net_cls net_cls,net_prio net_prio perf_event pids systemd unified 
	Any cgroup mount point that is detectible and accessible is supported. /sys/fs/cgroup is recommended as a standard location.
	Cgroup mounts:
	cgroup /sys/fs/cgroup/systemd cgroup rw,seclabel,nosuid,nodev,noexec,relatime,xattr,name=systemd 0 0
	cgroup /sys/fs/cgroup/memory cgroup rw,seclabel,nosuid,nodev,noexec,relatime,memory 0 0
	cgroup /sys/fs/cgroup/pids cgroup rw,seclabel,nosuid,nodev,noexec,relatime,pids 0 0
	cgroup /sys/fs/cgroup/hugetlb cgroup rw,seclabel,nosuid,nodev,noexec,relatime,hugetlb 0 0
	cgroup /sys/fs/cgroup/cpuset cgroup rw,seclabel,nosuid,nodev,noexec,relatime,cpuset 0 0
	cgroup /sys/fs/cgroup/blkio cgroup rw,seclabel,nosuid,nodev,noexec,relatime,blkio 0 0
	cgroup /sys/fs/cgroup/perf_event cgroup rw,seclabel,nosuid,nodev,noexec,relatime,perf_event 0 0
	cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,seclabel,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0
	cgroup /sys/fs/cgroup/devices cgroup rw,seclabel,nosuid,nodev,noexec,relatime,devices 0 0
	cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,seclabel,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0
	cgroup /sys/fs/cgroup/freezer cgroup rw,seclabel,nosuid,nodev,noexec,relatime,freezer 0 0


Docker version: [Supported and recommended]
	Docker version is 1.13.1. Versions >= 1.0 are supported. 1.2+ are recommended.


Docker driver setup: [Supported and recommended]
	Storage driver is overlay2.


Block device setup: [Supported and recommended]
	At least one device supports 'cfq' I/O scheduler. Some disk stats can be reported.
	 Disk "sda" Scheduler type "cfq".
	 Disk "dm-0" Scheduler type "none".
	 Disk "dm-1" Scheduler type "none".
	 Disk "dm-2" Scheduler type "none".
	 Disk "dm-3" Scheduler type "none".


Inotify watches: 
	

Managed containers: 
	/system.slice/akmods.service
	/system.slice/livesys.service
	/system.slice/systemd-logind.service
	/system.slice/netcf-transaction.service
	/system.slice/systemd-udevd.service
	/system.slice/polkit.service
	/kubepods.slice/kubepods-burstable.slice
	/system.slice/rtkit-daemon.service
	/system.slice/atd.service
	/system.slice/abrt-xorg.service
	/kube-proxy
	/
	/system.slice/kube-scheduler.service
	/system.slice/chronyd.service
	/system.slice/lvm2-lvmetad.service
	/system.slice/wpa_supplicant.service
	/system.slice/systemd-tmpfiles-setup.service
	/system.slice/systemd-random-seed.service
	/user.slice/user-0.slice/session-c13.scope
	/system.slice/rhel-push-plugin.service
	/system.slice/geoclue.service
	/system.slice/gdm.service
	/user.slice/user-1000.slice
	/kubepods.slice
	/system.slice/system-systemd\x2dcryptsetup.slice
	/system.slice/fedora-import-state.service
	/system.slice/docker.service
	/system.slice/sssd.service
	/system.slice/dracut-shutdown.service
	/system.slice/cups.service
	/user.slice/user-42.slice
	/system.slice/kube-proxy.service
	/system.slice/accounts-daemon.service
	/system.slice/mariadb.service
	/system.slice/kube-apiserver.service
	/system.slice/systemd-update-utmp.service
	/system.slice/rngd.service
	/system.slice/abrt-journal-core.service
	/system.slice/iscsi-shutdown.service
	/system.slice/system-lvm2\x2dpvscan.slice
	/system.slice/system-systemd\x2dfsck.slice
	/user.slice
	/system.slice/systemd-journal-flush.service
	/system.slice/iio-sensor-proxy.service
	/system.slice/udisks2.service
	/system.slice/gssproxy.service
	/system.slice/fedora-readonly.service
	/system.slice/systemd-user-sessions.service
	/system.slice/docker-containerd.service
	/system.slice/upower.service
	/system.slice/systemd-tmpfiles-setup-dev.service
	/system.slice/ModemManager.service
	/system.slice/etcd.service
	/system.slice/bluetooth.service
	/system.slice/avahi-daemon.service
	/system.slice/abrt-oops.service
	/system.slice
	/system.slice/systemd-modules-load.service
	/system.slice/NetworkManager.service
	/system.slice/systemd-udev-trigger.service
	/system.slice/system-getty.slice
	/system.slice/systemd-fsck-root.service
	/system.slice/libvirtd.service
	/system.slice/colord.service
	/system.slice/crond.service
	/system.slice/systemd-journald.service
	/system.slice/lvm2-monitor.service
	/system.slice/NetworkManager-wait-online.service
	/kubepods.slice/kubepods-besteffort.slice
	/system.slice/kmod-static-nodes.service
	/system.slice/alsa-state.service
	/init.scope
	/user.slice/user-0.slice
	/system.slice/rpc-statd-notify.service
	/system.slice/livesys-late.service
	/system.slice/system-systemd\x2dbacklight.slice
	/system.slice/mcelog.service
	/user.slice/user-0.slice/user@0.service
	/system.slice/systemd-sysctl.service
	/system.slice/kube-controller-manager.service
	/system.slice/dbus.service
	/system.slice/auditd.service
	/system.slice/abrtd.service
	/system.slice/firewalld.service
	/system.slice/kubelet.service
	/system.slice/systemd-udev-settle.service
	/system.slice/systemd-remount-fs.service

@zyxtech

zyxtech commented Feb 5, 2018

Met the same error.
OS: CentOS 7.4 (Vagrant 2.0.1, VirtualBox 5.2.2r119230 virtualization)
kubelet 1.9.2

journalctl -xeu kubelet

kubelet[11291]: E0205 05:57:29.768433 11291 container_manager_linux.go:583] [ContainerManager]: [ContainerManager]: Fail to get rootfs information unable to find data for container /

@ingvagabund

Contributor

ingvagabund commented Feb 9, 2018

Ughhh: #40050

With kubelet.kubeconfig:

apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: http://127.0.0.1:8080/
    name: local
contexts:
  - context:
      cluster: local
    name: local
current-context: local

and setting --fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubelet.kubeconfig, I was able to create a local cluster with the F27 build of kubernetes-1.9.1.
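
For reference, one way to wire those flags in when the kubelet runs under systemd is a drop-in file. This is only a rough sketch: the drop-in path and the KUBELET_EXTRA_ARGS variable are assumptions about how the unit was installed (kubeadm-style units read that variable; adjust it to whatever your ExecStart actually uses).

mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' >/etc/systemd/system/kubelet.service.d/20-local-cluster.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubelet.kubeconfig"
EOF
systemctl daemon-reload && systemctl restart kubelet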

@ingvagabund

Contributor

ingvagabund commented Feb 9, 2018

@voor can you verify?

@drpaneas

drpaneas commented Feb 19, 2018

I have a similar error in the kubelet logs:

hyperkube[13020]: E0219 17:36:12.732825   13020 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 52 in cached partitions map

Applying the configuration posted by @ingvagabund

d111:~ # cat /etc/kubernetes/kubelet 


###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=d111.qam.suse.de"

# Add your own!
KUBELET_ARGS="--fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubelet.kubeconfig --pod-manifest-path=/etc/kubernetes/manifests"

it seems that my node (d111 in this example) is now registered.

d295:~ # kubectl get nodes
NAME               STATUS    ROLES     AGE       VERSION
d111.qam.suse.de   Ready     <none>    20s       v1.9.3

However, this error is still there.

Distribution: openSUSE Tumbleweed

@kow3ns kow3ns added this to Backlog in Workloads Feb 27, 2018

@chestack

Contributor

chestack commented Mar 5, 2018

+1

kubernetes: v1.9.2
docker: 17.03.2-ce

On the host, the du command hangs when this error happens.

@xandriaw

xandriaw commented Mar 27, 2018

I'm also getting a similar error on a VM trying to join an initialized master, as in the tutorial.
Kubernetes v1.9.6
Docker version 18.03.0-ce, build 0520e24

@somedayiamold

somedayiamold commented Apr 11, 2018

Met the same problem, but it seems to have no impact on the cluster.
kubernetes: v1.9.1
docker: 1.13.1

@wuhanlin007

wuhanlin007 commented Apr 12, 2018

I get the same error on CentOS.
kubernetes: 1.9.3

Fail to get rootfs information unable to find data for container

@mritd

mritd commented Apr 17, 2018

Same issue on 1.10.1 (Ubuntu 16.04):

k1.node ➜  ~ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:14:26Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1", GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean", BuildDate:"2018-04-12T14:14:26Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

k1.node ➜  ~ docker info
Containers: 9
 Running: 9
 Paused: 0
 Stopped: 0
Images: 11
Server Version: 18.03.0-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: cfd04396dc68220d1cecbe686a6cc3aa5ce3667c
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-119-generic
Operating System: Ubuntu 16.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.855GiB
Name: k1.node
ID: G6CA:XNTB:CGDE:H6L7:BQ5D:NOTY:PJWW:JG57:VBWQ:5KZA:FNBI:G46U
Docker Root Dir: /data/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
@bishnuroy

bishnuroy commented Apr 19, 2018

Service is running but getting the same error.

Apr 19 09:51:40 bflk8smin903 kubelet[1463]: mount: /tmp/configdrive494564793: /dev/vdb already mounted on /media/configdrive.
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: W0419 09:51:40.037669    1463 server.go:387] invalid kubeconfig: stat /etc/kubernetes/ssl/kubeconfig.yaml: no such file or directory
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: W0419 09:51:40.209452    1463 server.go:236] No api server defined - no events will be sent to API server.
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.209513    1463 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.210495    1463 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.210547    1463 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgrou
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.210809    1463 container_manager_linux.go:266] Creating device plugin manager: false
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.211275    1463 kubelet.go:291] Adding manifest path: /etc/kubernetes/manifests
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: W0419 09:51:40.215980    1463 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.216048    1463 kubelet.go:577] Hairpin mode set to "hairpin-veth"
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.217927    1463 client.go:80] Connecting to docker on unix:///var/run/docker.sock
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.217989    1463 client.go:109] Start docker client with request timeout=2m0s
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: W0419 09:51:40.220368    1463 cni.go:171] Unable to update cni config: No networks found in /etc/kubernetes/cni/net.d
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.227899    1463 docker_service.go:232] Docker cri networking managed by kubernetes.io/no-op
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.242883    1463 docker_service.go:237] Docker Info: &{ID:3KBG:CFCH:Y5TD:VYDL:QWUI:KHYL:P6AP:GMLA:7V5U:UPFD:CTCE:7QXD Containers:0 ContainersRunning:0 Contai
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.243048    1463 docker_service.go:250] Setting cgroupDriver to cgroupfs
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.278377    1463 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.281744    1463 kuberuntime_manager.go:186] Container runtime docker initialized, version: 17.12.1-ce, apiVersion: 1.35.0
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.286313    1463 server.go:755] Started kubelet
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: W0419 09:51:40.286435    1463 kubelet.go:1365] No api server defined - no node status update will be sent.
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: E0419 09:51:40.286625    1463 kubelet.go:1281] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable t
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.286752    1463 server.go:129] Starting to listen on 0.0.0.0:10250
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.288237    1463 server.go:299] Adding debug handlers to kubelet server.
Apr 19 09:51:40 bflk8smin903 kubelet[1463]: I0419 09:51:40.288860    1463 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Apr 19 09:51:41 bflk8smin903 kubelet[1463]: I0419 09:51:41.032670    1463 kubelet_node_status.go:329] Adding node label from cloud provider: beta.kubernetes.io/instance-type=5
Apr 19 09:51:41 bflk8smin903 kubelet[1463]: I0419 09:51:41.032724    1463 kubelet_node_status.go:340] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=nova
Apr 19 09:51:41 bflk8smin903 kubelet[1463]: I0419 09:51:41.032736    1463 kubelet_node_status.go:344] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=RegionOne
Apr 19 09:51:41 bflk8smin903 kubelet[1463]: I0419 09:51:41.483187    1463 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Apr 19 09:51:41 bflk8smin903 kubelet[1463]: I0419 09:51:41.483254    1463 status_manager.go:136] Kubernetes client is nil, not starting status manager.
Apr 19 09:51:41 bflk8smin903 kubelet[1463]: I0419 09:51:41.483276    1463 kubelet.go:1772] Starting kubelet main sync loop.
Apr 19 09:51:41 bflk8smin903 kubelet[1463]: I0419 09:51:41.483378    1463 kubelet.go:1789] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.8547
Apr 19 09:51:41 bflk8smin903 kubelet[1463]: I0419 09:51:41.483384    1463 volume_manager.go:247] Starting Kubelet Volume Manager
Apr 19 09:51:41 bflk8smin903 kubelet[1463]: E0419 09:51:41.483531    1463 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Apr 19 09:51:41 bflk8smin903 kubelet[1463]: I0419 09:51:41.583641    1463 kubelet.go:1789] skipping pod synchronization - [container runtime is down]
Apr 19 09:51:41 bflk8smin903 kubelet[1463]: I0419 09:51:41.783845    1463 kubelet.go:1789] skipping pod synchronization - [container runtime is down]
Apr 19 09:51:42 bflk8smin903 kubelet[1463]: I0419 09:51:42.184552    1463 kubelet.go:1789] skipping pod synchronization - [container runtime is down]
Apr 19 09:51:42 bflk8smin903 kubelet[1463]: E0419 09:51:42.483729    1463 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Apr 19 09:51:42 bflk8smin903 kubelet[1463]: I0419 09:51:42.984757    1463 kubelet.go:1789] skipping pod synchronization - [container runtime is down]
Apr 19 09:51:43 bflk8smin903 kubelet[1463]: E0419 09:51:43.484485    1463 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Apr 19 09:51:43 bflk8smin903 kubelet[1463]: I0419 09:51:43.941598    1463 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Apr 19 09:51:44 bflk8smin903 kubelet[1463]: I0419 09:51:44.593736    1463 kubelet_node_status.go:329] Adding node label from cloud provider: beta.kubernetes.io/instance-type=5
Apr 19 09:51:44 bflk8smin903 kubelet[1463]: I0419 09:51:44.593793    1463 kubelet_node_status.go:340] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=nova
Apr 19 09:51:44 bflk8smin903 kubelet[1463]: I0419 09:51:44.593806    1463 kubelet_node_status.go:344] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=RegionOne
@Vogtinator

Vogtinator commented May 3, 2018

Major 0 is used by the Linux kernel for mountpoints that do not have a direct backing device. This applies to tmpfs, nsfs, overlayfs, and btrfs subvolumes, for example.
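
As a quick check of which device backs the kubelet directory (a small sketch assuming util-linux's findmnt is available; a MAJ:MIN starting with 0 means there is no direct backing block device):

findmnt -T /var/lib/kubelet -o TARGET,SOURCE,FSTYPE,MAJ:MIN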

@voor

voor commented May 3, 2018

I wonder if it's an SELinux issue with the mounting of the volume, then?

@Vogtinator

Vogtinator commented May 3, 2018

These might actually be different errors:

"Fail to get rootfs information failed to get device for dir /var/lib/kubelet" vs.
"Fail to get rootfs information unable to find data for container /".

"I wonder if it's an SELinux issue with the mounting of the volume, then?"

I don't think so.

@vincentwu2011

vincentwu2011 commented May 11, 2018

+1
k8s: 1.9.6
docker: 1.13.1

Sometimes the issue can be fixed by rebooting the node.

@fjammes

fjammes commented May 12, 2018

+1
Having error:
Fail to get rootfs information unable to find data for container /
kubeadm, 1.9.1-0
kubectl, 1.9.1-0
kubelet, 1.9.1-0
kubernetes-cni, 0.6.0-0

docker: 18.03.1-ce
overlay2 on xfs

@chlyyy

chlyyy commented May 18, 2018

+1

@wskinner

wskinner commented May 23, 2018

Me too.
kubernetes 1.9.3
ubuntu 16.04
docker 17.03.2-ce, build f5ec1e2

@sgmiller

sgmiller commented Jun 15, 2018

Same, Kubernetes 1.9.5, ubuntu 16.04, docker 18.03.1-ce

@JetMuffin

JetMuffin commented Jun 27, 2018

I got a similar error on Kubernetes 1.10.2 with kubelet 1.10.2; here are some logs:

Jun 27 17:16:10 zth kubelet[151929]: I0627 17:16:10.319443  151929 cpu_manager.go:155] [cpumanager] starting with none policy
Jun 27 17:16:10 zth kubelet[151929]: I0627 17:16:10.319455  151929 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jun 27 17:16:10 zth kubelet[151929]: I0627 17:16:10.319467  151929 policy_none.go:42] [cpumanager] none policy: Start
Jun 27 17:16:10 zth kubelet[151929]: F0627 17:16:10.319490  151929 kubelet.go:1359] Failed to start ContainerManager failed to get rootfs info: unable to find data for container /
Jun 27 17:17:00 zth systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
@daicang

daicang commented Jun 28, 2018

In my case, it seems to be caused by force-killing the kubelet without cleaning up pod mountpoints. Deleting /var/lib/kubelet/pods/{pod-with-volume-mount} fixed the issue. Thanks @Vogtinator for the hints.
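
Roughly, the cleanup looks like this (a sketch only; <pod-uid> and the volume path are placeholders, so check what is actually still mounted before removing anything):

# list leftover mounts under the kubelet pods directory
grep /var/lib/kubelet/pods /proc/self/mountinfo
# unmount the stale volume of the affected pod, then remove its directory
umount /var/lib/kubelet/pods/<pod-uid>/volumes/<plugin>/<volume-name>
rm -rf /var/lib/kubelet/pods/<pod-uid>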

@JetMuffin

JetMuffin commented Jul 2, 2018

After debugging, I found it's caused by cgroups in my case. Similar to #55386, on my server cpuacct.stat has 6 fields where 4 are expected, so cAdvisor cannot collect container metrics.

Modifying the code at vendor/github.com/opencontainers/runc/libcontainer/cgroups/fs/cpuacct.go#L88 and recompiling the kubelet works for me:

...
    fields := strings.Fields(string(data))
    // Accept both the usual 4-field layout and the 6-field layout seen on this kernel.
    if len(fields) != 4 && len(fields) != 6 {
        return 0, 0, fmt.Errorf("failure - %s is expected to have 4 or 6 fields", filepath.Join(path, cgroupCpuacctStat))
    }
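
To check whether a node shows the same symptom (a quick sketch; the cgroup mount path matches the systemd layout shown earlier in this thread and may differ elsewhere), cpuacct.stat normally has just two lines, "user N" and "system N", i.e. 4 fields:

cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.stat
cat /sys/fs/cgroup/cpu,cpuacct/cpuacct.stat | wc -w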
@timchenxiaoyu

Contributor

timchenxiaoyu commented Aug 16, 2018

k8s 1.6.4, docker 1.12.6: also hitting this problem.

E0816 16:21:12.121683 415225 kubelet.go:1165] Image garbage collection failed: unable to find data for container /
E0816 16:21:12.227843 415225 kubelet.go:1661] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E0816 16:21:12.227869 415225 kubelet.go:1669] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
E0816 16:21:12.234638 415225 kubelet.go:1661] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E0816 16:21:12.234660 415225 kubelet.go:1669] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
