Failed to mount rootfs component with overlay filesystem #2755

Closed
jiayihu opened this issue Dec 27, 2020 · 3 comments

jiayihu commented Dec 27, 2020

Environmental Info:
K3s Version: v1.17.5+k3s1

Node(s) CPU architecture, OS, and Version: Raspberry Pi 3B+ with Alpine Linux ARM64

Cluster Configuration: 1 master, 2 workers

Describe the bug:

Both the master and the workers are unable to mount rootfs and therefore cannot create pods. I have /var/lib/rancher set up as an overlay mount whose upper layer lives on ext4 storage. The overlay filesystem was created following the steps described in https://wiki.alpinelinux.org/wiki/Raspberry_Pi#Persistent_storage, in particular the example for /usr.
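
For reference, the mount looks roughly like this fstab entry (a sketch following the wiki's /usr example: /media/persist is the ext4 partition, and the workdir path here is only illustrative):

  # /etc/fstab entry for the overlay on /var/lib/rancher (paths approximate)
  overlay /var/lib/rancher overlay lowerdir=/var/lib/rancher,upperdir=/media/persist/rancher,workdir=/media/persist/.rancher-work 0 0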

Steps To Reproduce:

  • Just install k3s

Additional context / logs:

Example log:

E1227 18:19:01.035016    3323 pod_workers.go:191] Error syncing pod 095adbda-54ef-4818-9ea2-12f6c4b4aa35 ("coredns-6c6bb68b64-nbzzs_kube-system(095adbda-54ef-4818-9ea2-12f6c4b4aa35)"),
skipping: failed to "CreatePodSandbox" for "coredns-6c6bb68b64-nbzzs_kube-system(095adbda-54ef-4818-9ea2-12f6c4b4aa35)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-6c6bb68b64-nbzzs_kube-system(095adbda-54ef-4818-9ea2-12f6c4b4aa35)\"
failed: rpc error: code = Unknown desc = failed to create containerd task:
failed to mount rootfs component &{overlay overlay [workdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/230/work upperdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/230/fs lowerdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs]}:
invalid argument: unknown"

In a more readable format, containerd is trying to mount these dirs:

lowerdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs
upperdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/230/fs
workdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/230/work

Does it fail because /var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/230/fs cannot be used as an upperdir, given that /var/lib/rancher is itself a lowerdir in my setup and the actual upperdir is /media/persist/rancher on my device?
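
One way to narrow this down might be to retry the same mount by hand outside of containerd (a sketch, reusing the snapshot paths from the log above):

  # hypothetical manual reproduction of the mount containerd attempts;
  # if this also fails with "invalid argument", the kernel is rejecting the nested overlay
  mkdir -p /tmp/overlay-test
  mount -t overlay overlay \
    -o lowerdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/1/fs,upperdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/230/fs,workdir=/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/230/work \
    /tmp/overlay-test
  dmesg | tail    # the kernel usually logs why an overlay mount was refused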


jiayihu commented Dec 27, 2020

Updating to v1.19.5+k3s2 and using --snapshotter native removes the error (the invocation is sketched after the logs), but I now get the following warnings:

E1227 21:05:03.699917    2384 cri_stats_provider.go:376] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.native": unable to find data in memory cache.
E1227 21:05:03.700114    2384 kubelet.go:1218] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
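
For completeness, this is roughly how the snapshotter flag is passed (a sketch, assuming the standard k3s install script; INSTALL_K3S_EXEC carries extra arguments to the service):

  # install (or re-run the installer) with the native snapshotter instead of overlayfs
  curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--snapshotter native" sh -
  # equivalent when running the binary directly:
  k3s server --snapshotter native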

The node description also shows temporary disk pressure and problems with image garbage collection:

  Normal   Starting                 33m                    kube-proxy  Starting kube-proxy.
  Normal   Starting                 33m                    kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      33m                    kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientPID     33m (x2 over 33m)      kubelet     Node rasp-3 status is now: NodeHasSufficientPID
  Normal   NodeNotReady             33m                    kubelet     Node rasp-3 status is now: NodeNotReady
  Normal   NodeHasSufficientMemory  33m (x2 over 33m)      kubelet     Node rasp-3 status is now: NodeHasSufficientMemory
  Warning  Rebooted                 33m                    kubelet     Node rasp-3 has been rebooted, boot id: b92a06fb-5936-43fe-9f8c-d53d6328a876
  Normal   NodeAllocatableEnforced  33m                    kubelet     Updated Node Allocatable limit across pods
  Normal   NodeReady                33m                    kubelet     Node rasp-3 status is now: NodeReady
  Normal   NodeHasDiskPressure      32m                    kubelet     Node rasp-3 status is now: NodeHasDiskPressure
  Warning  EvictionThresholdMet     32m (x2 over 32m)      kubelet     Attempting to reclaim ephemeral-storage
  Normal   NodeHasNoDiskPressure    27m (x3 over 33m)      kubelet     Node rasp-3 status is now: NodeHasNoDiskPressure
  Warning  FreeDiskSpaceFailed      18m                    kubelet     failed to garbage collect required amount of images. Wanted to free 61165568 bytes, but freed 274812 bytes
  Warning  FreeDiskSpaceFailed      13m                    kubelet     failed to garbage collect required amount of images. Wanted to free 60198912 bytes, but freed 7503 bytes
  Warning  ImageGCFailed            13m                    kubelet     failed to garbage collect required amount of images. Wanted to free 60198912 bytes, but freed 7503 bytes
  Warning  FreeDiskSpaceFailed      3m37s (x2 over 8m37s)  kubelet     failed to garbage collect required amount of images. Wanted to free 60166144 bytes, but freed 0 bytes
  Warning  ImageGCFailed            3m37s (x2 over 8m37s)  kubelet     failed to garbage collect required amount of images. Wanted to free 60166144 bytes, but freed 0 bytes

brandond (Member) commented

I don't think you can nest overlayfs mounts like that, at least not without the kubelet getting very confused. /var/lib/rancher/k3s, or wherever you have your data-dir pointed, should be on a standard filesystem (ext4/xfs/btrfs) that is supported as an overlayfs lowerdir.
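
For example (a sketch, reusing the /media/persist ext4 mount mentioned above), pointing the data dir at the ext4 partition directly would avoid the nested overlay entirely:

  # run k3s with its data dir on the ext4 partition itself (path is illustrative)
  k3s server --data-dir /media/persist/rancher/k3s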


jiayihu commented Dec 28, 2020

There is definitely something wrong with using an overlay filesystem to store k3s and containerd data; I'll reconfigure my cluster to use a more standard filesystem. Using an overlay filesystem along with Alpine local backup was a nice experiment, but I guess it doesn't play nicely with containerd's assumptions about the filesystem.
