Scripts Used to Work Around K8s Problems
I'm attaching some code here so that I can share it.
This works around:
- kubernetes/kubernetes#64137
- Azure/AKS#750
- kubernetes/kubernetes#73799
- kubernetes/kubernetes#60987
- k3s-io/k3s#294
- kubernetes/kubernetes#70324
and probably some other tickets that aren't properly marked as duplicates yet.

Basically, K8s is leaving orphaned mounts behind when pods are cleaned up, which causes CPU usage to increase linearly until the whole node goes down. K8s CronJobs exacerbate this in particular, since they have such a short lifecycle: with a few cronjobs running every minute, I can take down a two-core AKS cluster node in about a week. Depending on which ticket you look at, the problem is attributed either to a failure of K8s to clean up mounts or to a bad interaction between certain kernel versions and certain versions of systemd. My money is on the latter since, on my system, the pod directories (and thus their mount subdirectories) are already gone but systemd still has the cgroup watches registered.
Anyhow, this DaemonSet works around the problem by running an hourly script that searches for the orphaned mounts and asks systemd to stop watching them.
kubectl apply -f systemd-cgroup-gc.yaml
kubectl delete -f systemd-cgroup-gc.yaml
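For reference, the hourly cleanup boils down to something like the following. This is only a sketch: the "kubelet" unit-name pattern and the idea of matching kubelet volume mount units are assumptions about a typical node layout, not a copy of the script inside systemd-cgroup-gc.yaml.

```sh
#!/bin/sh
# Sketch: stop mount units that systemd is still tracking for kubelet pod
# volumes whose backing directories are already gone. The "kubelet" unit-name
# pattern below is an assumption, not necessarily what systemd-cgroup-gc.yaml ships.
set -eu

systemctl list-units --type=mount --no-legend \
  | grep -o '[^ ]*kubelet[^ ]*\.mount' \
  | while read -r unit; do
      # "Where" is the mount point the unit tracks.
      where=$(systemctl show -p Where "$unit" | cut -d= -f2-)
      # If the pod directory behind the mount no longer exists, the unit is an
      # orphan that systemd keeps watching; stopping it releases the watch.
      if [ -n "$where" ] && [ ! -d "$where" ]; then
        echo "Stopping orphaned mount unit: $unit ($where)"
        systemctl stop "$unit" || true
      fi
    done
```

In the DaemonSet form, something like this has to run privileged against the node's own systemd, on an hourly schedule.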
node-sysctls.yaml is a simple DaemonSet that sets sysctl values on the nodes. Right now it only raises the limit for fs.inotify.max_user_watches. I think the only reason we are hitting the watch limit at all is the same set of bugs that systemd-cgroup-gc works around.
kubectl apply -f node-sysctls.yaml
kubectl delete -f node-sysctls.yaml
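For context, the change this DaemonSet makes on each node amounts to a single sysctl. The specific limit below is illustrative, not necessarily the value used in node-sysctls.yaml:

```sh
# Raise the inotify watch limit on the node (the value is illustrative).
sysctl -w fs.inotify.max_user_watches=1048576

# Persistent equivalent, if the setting should survive a node reboot:
echo 'fs.inotify.max_user_watches = 1048576' > /etc/sysctl.d/90-inotify.conf
sysctl --system
```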
regular-reboot.yaml is a simple DaemonSet that reboots nodes regularly. It uses kured to safely trigger a rolling reboot during a specified window if a node has not been rebooted within a specified timeframe. If kured is not running, marking the node for reboot has no effect.
kubectl apply -f regular-reboot.yaml
kubectl delete -f regular-reboot.yaml
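The "mark the node for reboot" part is just creating kured's sentinel file once the node has been up too long. A minimal sketch of that check, assuming kured's default sentinel path /var/run/reboot-required and an illustrative one-week threshold:

```sh
#!/bin/sh
# Sketch: flag the node for reboot when it has been up longer than a threshold.
# kured watches /var/run/reboot-required by default; the one-week threshold is
# an assumption, not necessarily what regular-reboot.yaml configures.
max_uptime_seconds=$((7 * 24 * 60 * 60))

uptime_seconds=$(cut -d. -f1 /proc/uptime)
if [ "$uptime_seconds" -gt "$max_uptime_seconds" ]; then
  touch /var/run/reboot-required
fi
```

Run from a DaemonSet, this needs the host's /var/run mounted into the pod so that kured (which also runs on the node) sees the same file; kured then handles the cordon, drain, reboot, and uncordon within the configured window.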