Environmental Info:
K3s Version: v1.30.1
OS Version: AlmaLinux
Node(s) CPU architecture, OS, and Version:
Cluster Configuration: single, all-in-one node
Containerd Version: v1.7.15-k3s1
Describe the bug:

When running k3s on an AlmaLinux machine with cgroup2 support, the error message "failed to create fsnotify watcher: too many open files" occurs.

After investigating, it was found that containerd-shim processes are consuming a large number of inotify instances.

Command used to find the containerd-shim processes:

for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | cut -d/ -f3 | xargs -I '{}' -- ps --no-headers -o comm {} | sort | uniq -c | sort -nr
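For reference, here is a variant of the same idea that keeps the PID visible, so the individual shim processes can be identified (a sketch built on the same /proc interface; anon_inode:inotify is the link name Linux gives inotify file descriptors):

# Count inotify instances per PID and command name
for fd in /proc/*/fd/*; do
  link=$(readlink "$fd" 2>/dev/null) || continue
  [ "$link" = "anon_inode:inotify" ] || continue
  pid=${fd#/proc/}; pid=${pid%%/*}
  printf '%s %s\n' "$pid" "$(ps --no-headers -o comm "$pid" 2>/dev/null)"
done | sort | uniq -c | sort -nr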
Increasing fs.inotify.max_user_instances to 256 temporarily resolved the issue, but the problem persists. The node's default values for inotify are:
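The current values can be printed with sysctl, and the workaround applied like so (a sketch; the 256 value mirrors the workaround above, and the 99-inotify.conf file name is an arbitrary choice):

# Show the node's current inotify limits
sysctl fs.inotify

# Raise the per-user instance limit for the running kernel
sudo sysctl -w fs.inotify.max_user_instances=256

# Persist the setting across reboots
echo 'fs.inotify.max_user_instances = 256' | sudo tee /etc/sysctl.d/99-inotify.conf
sudo sysctl --system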
I have read through issue #10020 on GitHub, which suggested that this issue was resolved in k3s version 1.30.1. However, the problem persists in my setup.
Steps To Reproduce:

1. Deploy a single-node k3s cluster on an AlmaLinux machine with K3s version v1.30.1.
2. Check the installed cgroup version and make sure the current node supports cgroup2 (see the check after these steps).
3. When the number of pods in the cluster is large, you will encounter the error "Too many open files".
4. Use the command below to monitor the usage of inotify instances:
   for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | cut -d/ -f3 | xargs -I '{}' -- ps --no-headers -o comm {} | sort | uniq -c | sort -nr
5. Notice that the containerd-shim processes consume a large number of inotify instances compared to other processes.
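One way to verify step 2 (a sketch; stat reports cgroup2fs on a cgroup v2 host and tmpfs on a v1 host):

# Prints "cgroup2fs" when the node is running cgroup v2
stat -fc %T /sys/fs/cgroup/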
Expected behavior:

With K3s v1.30.1 and containerd v1.7.15-k3s1, I expect the number of inotify instances used by containerd-shim processes to stay within reasonable limits and not lead to "Too many open files" errors.

Actual behavior:
The containerd-shim processes are consuming a disproportionately high number of inotify instances, leading to the "Too many open files" error.

Additional context / logs:
Increasing user.max_inotify_instances temporarily resolved the issue, but the root cause remains that containerd-shim processes are using too many inotify instances.
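If it helps triage, the watches behind each inotify instance can be inspected through fdinfo (a sketch; run as root, and 12345 below is a placeholder for a containerd-shim PID):

pid=12345   # placeholder: substitute a containerd-shim PID
for fd in /proc/$pid/fd/*; do
  if [ "$(readlink "$fd")" = "anon_inode:inotify" ]; then
    echo "== fd ${fd##*/} =="
    # each "inotify wd:..." line is one watch held by this instance
    grep '^inotify' "/proc/$pid/fdinfo/${fd##*/}"
  fi
done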