containerd-shim creates many inotify instances
Environmental Info:
K3s Version:
root@iZwz9hd425x7nlxrle120jZ:~# k3s --version
k3s version v1.29.1+k3s2 (57482a1)
go version go1.21.6
Node(s) CPU architecture, OS, and Version:
Linux iZwz9hd425x7nlxrle120jZ 5.15.0-71-generic #78-Ubuntu SMP Tue Apr 18 09:00:29 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration:
single node, all-in-one
Containerd Version:
root@iZwz9hd425x7nlxrle120jZ:~# /var/lib/rancher/k3s/data/current/bin/containerd --version
containerd github.com/k3s-io/containerd v1.7.11-k3s2
Describe the bug:
When I ran k3s on an Ubuntu (cgroup2-supported) machine, I encountered a Too many open files error. After investigation, I found that the node's default user.max_inotify_instances is 128. After I increased this parameter, the cluster worked normally. Via the command

for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | cut -d/ -f3 | xargs -I '{}' -- ps --no-headers -o comm {} | sort | uniq -c | sort -nr

I can see that containerd-shim takes up too many inotify instances. This issue was fixed in #6498, but the fix doesn't seem to take effect.
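For readability, the one-liner above can be broken into annotated steps; this is the same logic, assuming only standard GNU coreutils and procps:

```shell
# Count how many inotify instances each process name holds.
# An inotify fd's symlink in /proc/<pid>/fd/ points at "anon_inode:inotify",
# so readlink -f yields /proc/<pid>/fd/anon_inode:inotify, and the PID is
# the 3rd "/"-separated field of that path.
for foo in /proc/*/fd/*; do readlink -f "$foo"; done \
  | grep inotify \
  | cut -d/ -f3 \
  | xargs -I '{}' -- ps --no-headers -o comm '{}' \
  | sort | uniq -c | sort -nr
```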
Steps To Reproduce:
Installed K3s.
Check the installed cgroups and make sure the current node supports cgroup2:
root@iZwz9hd425x7nlxrle120jZ:~# grep cgroup /proc/filesystems
nodev cgroup
nodev cgroup2
When the number of pods in the cluster is large, you will encounter the Too many open files error. At this point, run the command
for foo in /proc/*/fd/*; do readlink -f $foo; done | grep inotify | cut -d/ -f3 | xargs -I '{}' -- ps --no-headers -o comm {} | sort | uniq -c | sort -nr
and you will get the per-process inotify instance counts.

Expected behavior:
[root@iZwz9bpqft4yn267v49so6Z ~]# k3s --version
k3s version v1.25.12+k3s1 (7515237)
go version go1.20.6
[root@iZwz9bpqft4yn267v49so6Z ~]# uname -a
Linux iZwz9bpqft4yn267v49so6Z 3.10.0-1160.105.1.el7.x86_64 #1 SMP Thu Dec 7 15:39:45 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
[root@iZwz9bpqft4yn267v49so6Z ~]# grep cgroup /proc/filesystems
nodev cgroup
This is the result of running on CentOS (cgroup v1 only).
Actual behavior:
This is the result of running on Ubuntu (supports cgroup2).
Additional context / logs:
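A sketch of the workaround mentioned under Describe the bug (raising the per-user inotify instance limit). The value 1024 is an assumed example, not a recommendation from this report:

```shell
# Inspect the current per-user inotify instance limit
# (the report found the node's default user.max_inotify_instances = 128;
# the older fs.inotify.max_user_instances knob is checked as a fallback).
cat /proc/sys/user/max_inotify_instances 2>/dev/null \
  || cat /proc/sys/fs/inotify/max_user_instances

# Raise it for the running kernel (requires root; 1024 is an example value):
#   sysctl -w user.max_inotify_instances=1024
# Persist across reboots:
#   echo 'user.max_inotify_instances=1024' > /etc/sysctl.d/99-inotify.conf
#   sysctl --system
```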