
K3s docker image cannot be run on a cgroup v2 host #4085

Closed
brandond opened this issue Sep 27, 2021 · 1 comment

@brandond (Contributor)

Running the K3s Docker image on a cgroup v2 host fails:

docker run -it --rm --privileged --name k3s-server-1 --hostname k3s-server-1 rancher/k3s:v1.22.2-k3s1 server

E0927 22:36:42.258152       1 node_container_manager_linux.go:60] "Failed to create cgroup" err="cannot enter cgroupv2 \"/sys/fs/cgroup/kubepods\" with domain controllers -- it is in an invalid state" cgroupName=[kubepods]
E0927 22:36:42.258178       1 kubelet.go:1423] "Failed to start ContainerManager" err="cannot enter cgroupv2 \"/sys/fs/cgroup/kubepods\" with domain controllers -- it is in an invalid state"
W0927 22:36:42.258172       1 watcher.go:95] Error while processing event ("/sys/fs/cgroup/kubepods": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods: no such file or directory

The same happens with v1.21.5.

v1.20.11 throws a stack trace:

F0927 22:39:54.054203       1 kubelet.go:1367] Failed to start ContainerManager cannot enter cgroupv2 "/sys/fs/cgroup/kubepods" with domain controllers -- it is in an invalid state
INFO[2021-09-27T22:39:54.174703175Z] Starting /v1, Kind=Secret controller
goroutine 15054 [running]:
github.com/rancher/k3s/vendor/k8s.io/klog/v2.stacks(0xc000010001, 0xc000db6000, 0xb5, 0x24d)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
github.com/rancher/k3s/vendor/k8s.io/klog/v2.(*loggingT).output(0x7273600, 0xc000000003, 0x0, 0x0, 0xc00fc10e00, 0x6f0112f, 0xa, 0x557, 0x0)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:975 +0x19b
github.com/rancher/k3s/vendor/k8s.io/klog/v2.(*loggingT).printf(0x7273600, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x47bf96e, 0x23, 0xc0160d7a60, 0x1, ...)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:750 +0x191
github.com/rancher/k3s/vendor/k8s.io/klog/v2.Fatalf(...)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/klog/v2/klog.go:1502
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).initializeRuntimeDependentModules(0xc007968a80)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1367 +0x446
sync.(*Once).doSlow(0xc0079692d0, 0xc0042c9de8)
	/usr/local/go/src/sync/once.go:66 +0xec
sync.(*Once).Do(...)
	/usr/local/go/src/sync/once.go:57
github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).updateRuntimeUp(0xc007968a80)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:2199 +0x659
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0160a4550)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0160a4550, 0x4e56a20, 0xc0160e9200, 0xc00ffaa701, 0xc0000c8660)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0160a4550, 0x12a05f200, 0x0, 0x1, 0xc0000c8660)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0160a4550, 0x12a05f200, 0xc0000c8660)
	/go/src/github.com/rancher/k3s/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet.(*Kubelet).Run
	/go/src/github.com/rancher/k3s/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1420 +0x16a

@iwilltry42 proposed a fix for this in #3237 that wrapped K3s in a shell entrypoint; we asked for the logic to be moved into Go and closed the PR, but the Go port was never done.
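
For context, the shell-entrypoint approach works around the failure by evacuating the root cgroup before K3s starts: on a cgroup v2 host the container's processes begin life in the root cgroup, and the kubelet then cannot create /sys/fs/cgroup/kubepods with domain controllers enabled. Below is a minimal sketch of that pattern (as used by similar projects such as kind); it is illustrative only, not the code from #3237, and assumes the image provides a shell plus sed and xargs and that the k3s binary is on PATH:

#!/bin/sh
# Sketch only -- not the actual #3237 entrypoint.
# On cgroup v2, move every process out of the root cgroup into a child
# cgroup, then enable all controllers for children so the kubelet can
# create /sys/fs/cgroup/kubepods with domain controllers enabled.
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    mkdir -p /sys/fs/cgroup/init
    # one write per PID, including this shell's own PID
    xargs -rn1 < /sys/fs/cgroup/cgroup.procs > /sys/fs/cgroup/init/cgroup.procs || true
    # turn "cpuset cpu io memory ..." into "+cpuset +cpu +io +memory ..."
    sed -e 's/ / +/g' -e 's/^/+/' < /sys/fs/cgroup/cgroup.controllers \
        > /sys/fs/cgroup/cgroup.subtree_control
fi
exec k3s "$@"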

@rancher-max (Contributor)

Validated that this is working with the Docker install of v1.22.2-rc1+k3s2:

$ docker exec -it k3s-server-1 kubectl  get nodes,pods -A -o wide
NAME                STATUS   ROLES                  AGE   VERSION            INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION           CONTAINER-RUNTIME
node/k3s-server-1   Ready    control-plane,master   80s   v1.22.2-rc1+k3s2   172.17.0.2    <none>        K3s dev    5.8.15-301.fc33.x86_64   containerd://1.5.7-k3s1

NAMESPACE     NAME                                         READY   STATUS      RESTARTS   AGE   IP          NODE           NOMINATED NODE   READINESS GATES
kube-system   pod/local-path-provisioner-64ffb68fd-lgdgj   1/1     Running     0          66s   10.42.0.3   k3s-server-1   <none>           <none>
kube-system   pod/metrics-server-9cf544f65-gsw2s           1/1     Running     0          66s   10.42.0.4   k3s-server-1   <none>           <none>
kube-system   pod/coredns-85cb69466-zcnxk                  1/1     Running     0          66s   10.42.0.5   k3s-server-1   <none>           <none>
kube-system   pod/helm-install-traefik-crd--1-d2f47        0/1     Completed   0          66s   10.42.0.6   k3s-server-1   <none>           <none>
kube-system   pod/helm-install-traefik--1-j5mbr            0/1     Completed   1          66s   10.42.0.2   k3s-server-1   <none>           <none>
kube-system   pod/svclb-traefik-4d8g6                      2/2     Running     0          39s   10.42.0.7   k3s-server-1   <none>           <none>
kube-system   pod/traefik-74dd4975f9-ddk62                 1/1     Running     0          39s   10.42.0.8   k3s-server-1   <none>           <none>

$ stat -fc %T /sys/fs/cgroup/
cgroup2fs
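
The comment above does not show how the server was started; presumably it was the same docker run from the original report, pointed at the release-candidate image. A hedged reconstruction is below; the exact image tag is an assumption (Docker tags cannot contain "+", so the K3s image tags substitute a dash):

# Assumed command, mirroring the original report but detached and using the rc image tag.
docker run -d --rm --privileged --name k3s-server-1 --hostname k3s-server-1 rancher/k3s:v1.22.2-rc1-k3s2 server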

michielboekhoff added a commit to michielboekhoff/terraform-provider-kubectl that referenced this issue on Sep 7, 2022, with the message:

    This allows running k3s on hosts with cgroup-v2, as per this issue:
    k3s-io/k3s#4085