Failed to find subsystem mount for required subsystem: pids #16
Expected Behaviour
The kubernetes master node starts, and kubectl get nodes shows Ready status.
Current Behaviour
The kubernetes master node starts, and kubectl get nodes shows NotReady status.
kubectl describe nodes shows this error in the event log:
Not sure - disable whatever requires this cgroup? Is it something new in 1.14? Or enable that cgroup in Raspbian Lite somewhere? (I'm not a cgroup expert, so I don't know where to even start.)
Steps to Reproduce (for bugs)
(Following the guide in this repo, I get these results at the "Check everything worked:" step of the guide.)
Can't schedule pods / nodes not ready.
The pids cgroup is not mounted on Raspbian:
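A quick way to confirm that on the Pi itself (these are standard Linux commands, nothing specific to this repo) - if the first grep returns nothing, the running kernel was built without the pids controller at all, and no cmdline.txt flag will bring it back:

```shell
# Does the kernel know about the pids controller at all?
# A matching line here means it is compiled in; no output means
# the kernel lacks it entirely and needs upgrading.
grep pids /proc/cgroups

# Is the controller actually mounted? (On a cgroup v1 system,
# systemd normally mounts it under /sys/fs/cgroup/pids.)
grep pids /proc/mounts
```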
I tried adding
For now, I see two options:
@alexellis, any inputs on that?
I was able to get my cluster upgraded to
This does have the issue:
I don't use any burstable pods on my cluster, but until
I had the same issue. Did a
Note: after upgrading in place I had to disable swap again.
I've managed to upgrade my Raspbian to Buster, but it wasn't error-free; the following issues were hit:
iptables in nf_tables mode - kube-proxy only works in legacy mode kubernetes/kubernetes#71305 (comment)
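The workaround from that upstream thread, as I understand it, is to flip Buster's iptables back to the legacy backend using Debian's stock update-alternatives mechanism:

```shell
# Buster ships iptables in nf_tables mode by default; kube-proxy
# (as of 1.14) only drives the legacy backend, so select it explicitly:
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
```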
swap would be enabled after each boot:
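For the swap issue, the usual Raspbian-specific fix (assuming the stock dphys-swapfile service is what re-creates it at boot) is to disable the service rather than just running swapoff:

```shell
# swapoff alone does not survive a reboot on Raspbian, because the
# dphys-swapfile service recreates and re-enables the swap file at boot.
sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo systemctl disable --now dphys-swapfile
```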
I noticed a substantial number of errors from Docker reporting that the net_prio cgroup was missing (even though it existed and was mounted) - upgrading docker-ce to
Given the issues we are finding with
What do you think?
I'm not sure how k3s would solve the issue at hand. Most of these are issues with either kube-proxy (iptables) or low-level cgroups being available from the kernel, which were related to Raspbian releases.
I've been running my kubeadm cluster for a few months now, and the only issues I find are during upgrades (OS and kubernetes).
The cluster in itself is stable for day-to-day operations and requires little to no hands-on maintenance.
That being said, I have wondered about moving over to k3s, as my etcd instance no longer fits alongside the control plane and I have a dedicated RPi for etcd.
My cluster consists of 3x amd64, 2x pine64(arm64), 2x RPi "masters" and 7x RPi workers/slaves.
My current recommendation is to use k3s: it uses far fewer resources and works very well on ARM, with no timing issues.
Please try it and let us know if it resolves those issues.
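For anyone wanting to try it, the upstream installer is a one-liner (the K3S_URL and K3S_TOKEN values below are placeholders you fill in for your own cluster, per the k3s docs):

```shell
# On the server (control-plane) node:
curl -sfL https://get.k3s.io | sh -

# On each agent, pointing at the server and using the token found in
# /var/lib/rancher/k3s/server/node-token on the server:
# curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```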