I see the following error message in kubelet logs:
Jun 00 00:00:00 my-worker-node kubelet[XXXXXXX]: E0000 00:00:00.000000 2571634 info.go:99] Failed to get disk map: open /sys/block/nvme0c0n1/dev: no such file or directory
I'm not an NVMe expert. I've been told that this type of device might only be present when multipathing is enabled.
I'm also not sure what machine/info.go is doing with the information it gets from the block devices, so I'm not sure of the impact here, other than that there are a lot of errors in my kubelet log output.
Note that in for _, disk := range disks { the error is returned directly from inside the loop, meaning the remaining disks might not be gathered or inspected.
This error message matches cadvisor here:
cadvisor/machine/info.go, line 104 (commit b7e6727)
Looking for usages of /dev, this is the culprit:
cadvisor/utils/sysfs/sysfs.go, line 208 (commit b7e6727)
Full stack:
cadvisor/utils/sysfs/sysfs.go, line 208 (commit b7e6727)
cadvisor/utils/sysinfo/sysinfo.go, line 63 (commit b7e6727)
cadvisor/machine/info.go, line 104 (commit b7e6727)
According to the Linux kernel source, this device naming scheme means the device is in subsystem 0, controller 0, namespace 1:
https://elixir.bootlin.com/linux/latest/source/drivers/nvme/host/core.c#L4361
This is a device for internal kernel usage only, and it's flagged as such with GENHD_FL_HIDDEN. Therefore, it's probably wrong that cadvisor is trying to read this device; it should probably be ignored with a rule similar to the one here:
cadvisor/utils/sysinfo/sysinfo.go, line 57 (commit b7e6727)
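Such a rule could be a simple name filter. This is only a sketch of one possible pattern, not cadvisor's actual ignore list: per-controller nodes of the form nvmeXcYnZ would be skipped, while the user-visible nvmeXnZ namespaces are kept.

```go
package main

import (
	"fmt"
	"regexp"
)

// nvmeHiddenCtrl matches per-controller NVMe nodes like nvme0c0n1, which the
// kernel marks GENHD_FL_HIDDEN when native multipathing is enabled. The
// user-visible namespace device is nvme0n1 and does not match.
var nvmeHiddenCtrl = regexp.MustCompile(`^nvme\d+c\d+n\d+$`)

func main() {
	for _, name := range []string{"nvme0c0n1", "nvme0n1", "sda"} {
		fmt.Printf("%s hidden=%v\n", name, nvmeHiddenCtrl.MatchString(name))
	}
	// nvme0c0n1 hidden=true
	// nvme0n1 hidden=false
	// sda hidden=false
}
```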