snapcraft/commands/daemon.start: bump fs.inotify.max_user_watches #361
Conversation
https://documentation.ubuntu.com/lxd/en/latest/reference/server_settings/#etc-sysctl-conf recommends setting `fs.inotify.max_user_watches` to `10485761` for a production setup. Since we already set `fs.inotify.max_user_instances` to `1024`, we expect a given host to accommodate at least that many containers. However, launching ~85 containers apparently pushed LXD into consuming all the user watches:

```
$ for i in $(seq 100); do lxc launch ... ; done
$ sudo systemctl reload snap.lxd.daemon
Failed to allocate directory watch: Too many open files
```

Signed-off-by: Simon Deziel <simon.deziel@canonical.com>
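For context, these limits are plain sysctl tunables. A minimal sketch of what a persistent bump looks like in a `sysctl.conf`-style fragment (the exact file path and the watch value are taken from the linked LXD server-settings page, not from this PR's diff):

```
# /etc/sysctl.d/10-lxd-inotify.conf (hypothetical path for illustration)
# Raise the per-user inotify limits so many containers on one host
# do not exhaust instances or watches.
fs.inotify.max_user_instances = 1024
fs.inotify.max_user_watches = 1048576
```

Such a fragment is applied at boot, or immediately with `sudo sysctl --system` (or `sudo sysctl -w fs.inotify.max_user_watches=1048576` for a one-off change).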
Any risk to doing this by default in the snap? Any thoughts on why only the existing setting was included by default?
Are the inotify entries being used caused by LXD or by the guest OS itself?
AFAIU, by LXD itself (the device monitor uses inotify to watch directories recursively).
It'd be good to get an understanding of why the number of watches used by LXD is tied to the number of instances. Or perhaps this isn't coming from LXD, but is instead watches from the guest OS?
Yes, this counter is about the number of watches, which are created with
I have tried to reproduce this behavior on my machine and created more than 100 containers. I haven't received any errors from the LXD daemon itself (and it's definitely using fanotify, at least on my Ubuntu 22.04). But an interesting thing I noticed is that I can't start more than ~97 containers.
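One way to check who is actually holding inotify resources (a debugging sketch, not part of this PR) is to count the `anon_inode:inotify` file descriptors per process; each one is an inotify instance counted against `fs.inotify.max_user_instances`:

```shell
# Count open inotify instances, grouped by PID (Linux only).
# Each /proc/<pid>/fd symlink pointing at "anon_inode:inotify"
# is one inotify instance held by that process.
find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null |
  cut -d/ -f3 | sort | uniq -c | sort -rn | head
```

If the top consumers are container init systems rather than the LXD daemon, that would point at the guest OS rather than LXD itself.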
Ah, so it's not LXD using them but the containers themselves.
@simondeziel please can you backport this to latest-candidate, 5.21-edge, 5.21-candidate, 5.0-edge, thanks