
Increase inotify default limits #2335

Merged

1 commit merged into bottlerocket-os:develop from the inotify branch on Aug 17, 2022

Conversation

@stmcginnis (Contributor) commented on Aug 11, 2022

Issue number:

Closes #1525

Description of changes:

We have had several reports from users that the inotify limits (`fs.inotify.max_user_instances` and `fs.inotify.max_user_watches`) are too low for their workloads, causing errors when deploying pods.

The user data settings can be used to raise (or lower) these defaults if an end user needs to fine-tune them.
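
As a sketch of that override path, assuming Bottlerocket's `settings.kernel.sysctl` user data table (the values shown are illustrative, not the defaults this PR introduces):

```toml
# Bottlerocket user data (TOML): override kernel sysctls at boot.
# These values are examples only, not the defaults set by this PR.
[settings.kernel.sysctl]
"fs.inotify.max_user_instances" = "8192"
"fs.inotify.max_user_watches" = "524288"
```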

Related Amazon Linux changes:

Testing done:

Made the changes and built an image to confirm there were no build errors.

Published an AMI and spun up an EKS cluster. Connected to the console, entered the admin container, and used `sheltie` to verify that the values returned match what is expected for these settings.
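
A minimal sketch of that check, run from the admin container (the expected numbers are whatever the merged defaults are, so none are hard-coded here):

```sh
# sheltie drops from the admin container into a root shell in the host's
# namespaces, where sysctl can read the live kernel settings.
sudo sheltie
sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
```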

Terms of contribution:

By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

@stmcginnis (Contributor, Author)

@markusboehme and @foersleo - would be great if you could weigh in on this change too. Thanks!

@markusboehme (Member) left a comment

I don't see any downsides to these changes. I have to say I was surprised by the high value for the `vm.max_map_count` sysctl, though. I wouldn't have thought of a process needing that many VMAs, but e.g. OpenSearch recommends configuring a minimum of 256k. Aside from the typical memory-mapped files, memory allocators will use anonymous maps to request memory from the kernel. To satisfy my curiosity, it will be interesting to find out which of these is the reason for OpenSearch's recommendation.

Not blocking approval on it, but as a minor improvement I'd prefer to see the changes split into two commits. The bump in inotify resource limits is independent of the limit on the number of VMAs per process.
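
As a side note on the VMA discussion above: each line of `/proc/<pid>/maps` describes one virtual memory area, so counting lines gives a rough sense of how close a process sits to the `vm.max_map_count` limit (the PID below is a placeholder):

```sh
sysctl vm.max_map_count    # per-process cap on the number of VMAs
wc -l /proc/1234/maps      # one line per mapping; 1234 is a placeholder PID
```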

@stmcginnis (Contributor, Author)

Thanks @markusboehme! I've dropped `vm.max_map_count` from this PR and will propose a separate one to handle that. For our immediate needs, the inotify settings are probably the more important ones.

Commit message:

We have had several reports from users that the inotify limits
(fs.inotify.max_user_instances and fs.inotify.max_user_watches) are too
low for their workloads, causing them to get errors when deploying pods.

Also bumping up vm.max_map_count so all three settings match what is
currently used for Amazon Linux to make sure we have a consistent
experience.

The user data settings can be used to raise (or lower) these defaults if
an end user needs to fine tune their settings.

Signed-off-by: Sean McGinnis <stmcg@amazon.com>
@jpmcb (Contributor) left a comment

This looks good to me and seems to be what kubernetes/kops and other container-focused distros set their limits to:

https://github.com/kubernetes/kops/blob/f442cc2d0a163b501c28c335ff12234f16bcb38a/nodeup/pkg/model/sysctls.go#L114-L118

Although, more generally, I am curious: why these numbers? Simply because they are sane middle grounds, not too high, not too low?
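
For context, the kops lines linked above boil down to plain sysctl entries applied at node provisioning; in shell form they would look roughly like this (values illustrative, see the linked file for kops's actual numbers):

```sh
# Illustrative only; check the linked sysctls.go for the exact values kops applies.
sysctl -w fs.inotify.max_user_instances=8192
sysctl -w fs.inotify.max_user_watches=524288
```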

@stmcginnis (Contributor, Author)

> Although, more generally, I am curious: why these numbers? Simply because they are sane middle grounds, not too high, not too low?

Partly a middle ground, so it just works for most end users without needing adjustments, and partly because we want to match what Amazon Linux uses as defaults, so the user experience is consistent if they switch between distros.

@stmcginnis stmcginnis merged commit 7555622 into bottlerocket-os:develop Aug 17, 2022
@stmcginnis stmcginnis deleted the inotify branch August 17, 2022 11:33
Successfully merging this pull request may close: Change default inotify settings (#1525)

4 participants