High memory usage if fs.nr_open is very high and no ulimit set on Linux systems #2299
Comments
Does it have the same effect passing the lower count with `--max-fd`?
@xrmx It does not, I wasn't aware of that option. I tested it with `--max-fd` and memory usage went back to normal.
Yeah, the root problem is that some data structures are as big as the number of fds available, hence the ill effect you have seen.
I see, thanks for the explanation. Assuming the data structures cannot be changed, I wonder if a default limit of something like `1048576` would make sense.
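For anyone who wants to verify which limit their uWSGI process actually ended up with, a check along these lines works (a sketch; the `pgrep` pattern assumes the process is simply named `uwsgi`):

```sh
# Show the soft and hard "Max open files" limits of the first matching uWSGI process
pid=$(pgrep -f uwsgi | head -n 1)
grep 'Max open files' "/proc/${pid}/limits"
```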
Fixes memory consumption of the "uWSGI http 1" process that was rising above 8 GiB on systems like Fedora 37 (locally) and Fedora CoreOS 36 (in the cloud) due to very high file descriptor limits (`fs.nr_open = 1073741816`). See <kubernetes-sigs/kind#2175> and <unbit/uwsgi#2299>. Sets the uWSGI `max-fd` value to 1048576 as per <https://github.com/kubernetes-sigs/kind/pull/1799/files>. If need be, we can make it configurable via Helm chart values later.
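Outside of Kubernetes, the same cap can be applied directly on the command line; a minimal sketch (the app file and port are placeholders, `--max-fd` is the relevant part):

```sh
# Cap the number of fds uWSGI sizes its structures for, regardless of the hard limit
# (app.py exposing a WSGI "application" callable is assumed)
uwsgi --http :8080 --max-fd 1048576 --wsgi-file app.py
```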
Just chiming in to share here since it was a result on the first page of a search query. Should help with visibility 👍

This will likely be due to a config on your system for the container runtime (`containerd.service` / `docker.service`), typically `LimitNOFILE=infinity` in the systemd unit. This goes back to the systemd v240 (2018Q4) release, which raised `fs.nr_open` and the hard limit available to services well beyond the traditional defaults.
Anyway... for container runtimes with systemd, they'd configure `LimitNOFILE` in their service units (with `infinity` resolving to the `fs.nr_open` ceiling). Often you can configure the limit yourself instead, per container or daemon-wide. Just to clarify, this typically only affects the soft limit value, although some software internally raises the soft limit to the hard limit (perfectly acceptable... just painful when the hard limit is effectively unbounded).

As for the memory usage, from what I've read in other software affected (Java), an array is allocated sized to the soft limit set, and that used 8 bytes per element, thus for a limit around `1073741816` you end up at roughly 8 GiB.
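To check whether a given host is in this situation, the runtime units and the current limits can be inspected directly; the arithmetic in the comments is the rough estimate described above, and the unit names assume the usual Docker/containerd setup:

```sh
# What the container runtime units request (infinity resolves to fs.nr_open)
systemctl show containerd.service docker.service --property=LimitNOFILE

# Kernel ceiling and the current shell's soft/hard fd limits
sysctl fs.nr_open
ulimit -Sn
ulimit -Hn

# Rough per-fd bookkeeping estimate at 8 bytes per slot:
#   1073741816 * 8 bytes ≈ 8 GiB
#   1048576    * 8 bytes ≈ 8 MiB
```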
# What this PR does

Fixes #1521 (see unbit/uwsgi#2299)

## Checklist

- [x] Unit, integration, and e2e (if applicable) tests updated
- [x] Documentation added (or `pr:no public docs` PR label added if not required)
- [x] `CHANGELOG.md` updated (or `pr:no changelog` PR label added if not required)
Just adding some more visibility here. This still bit me with Docker on Fedora 39. For people running into this issue:
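A common way to deal with it on the Docker side (a sketch based on standard Docker options; the image name and exact values are placeholders) is to cap the nofile ulimit per container, or for every container via the daemon config:

```sh
# Per container: soft limit 1024, hard limit 1048576
docker run --rm --ulimit nofile=1024:1048576 myimage

# Daemon-wide: add "default-ulimits" to /etc/docker/daemon.json and restart dockerd:
#   { "default-ulimits": { "nofile": { "Name": "nofile", "Soft": 1024, "Hard": 1048576 } } }
```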
While debugging kubernetes-sigs/kind#2175, I tried to understand why `uwsgi` wasn't running well on a Kind cluster on Fedora 33. I came to the conclusion that it is because of a too-high value for `fs.nr_open`, which defaults to `1073741816` on Fedora 33, but only `1048576` on Ubuntu 20.10. The very high limit causes, on my machine, the uWSGI process on a pod to consume >8Gi of memory on the `--http` process, and if memory limits are set, the process will get OOM-killed by the kernel (please see the issue above for a test repo and logs).

The issue isn't manifested when running `uwsgi` outside a container/pod because of per-user limits of `1024` set with `ulimit`. Also, the `containerd.service` unit seems to, by default, set a value for `fs.nr_open` of `1048576`, which helps avoid this issue when the container with `uwsgi` is run via `docker run`.
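A quick way to confirm which limits actually reach the workload in each case is to check from inside the container and the pod (the image and pod names below are placeholders):

```sh
# Limits inside a plain `docker run` container
docker run --rm fedora:37 sh -c 'ulimit -Sn; ulimit -Hn'

# Limits inside the Kubernetes pod running uwsgi
kubectl exec my-uwsgi-pod -- sh -c 'ulimit -Sn; ulimit -Hn'
```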
Pod logs (high limit set deliberately via `sysctl -w fs.nr_open=1073741816`):

Raising the limit on an Ubuntu 20.10 machine to `1073741816` and trying again, without a container: (note that I had to do it via `sysctl -w` and `ulimit -n` to raise both limits; it seems Ubuntu has a per-user limit set of `1024`).
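Raising both limits as described looks roughly like this (a sketch; it needs a root shell, and the app invocation is a placeholder):

```sh
# Raise the kernel ceiling for fd limits (root required)
sudo sysctl -w fs.nr_open=1073741816

# From a root shell, raise this shell's soft and hard limits, then start the app
# so it inherits them (an unprivileged shell cannot raise its hard limit this far)
ulimit -n 1073741816
uwsgi --http :8080 --wsgi-file app.py
```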
Lowering the value of `fs.nr_open` to `1048576` makes things work well on the pod. However, I wonder why the `uwsgi` process consumes so much memory when this limit is high.

Running the same `uwsgi` app without changing any limits, and without containers:

Memory usage is normal in this case.
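For comparing the scenarios, the resident memory of the uWSGI processes can be checked with something like the following (a sketch, not necessarily how the numbers here were collected):

```sh
# RSS in KiB of every uwsgi process, including the "uWSGI http 1" router
ps -C uwsgi -o pid,rss,cmd
```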
Finally, I notice that if I run `--http-socket` instead of `--http`, memory usage is what I would consider "normal" (a few hundred MiB at most), but these options are not equivalent according to the documentation.
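For reference, the two invocations being compared look like this (a sketch; the app file and port are placeholders):

```sh
# --http spawns the separate "uWSGI http 1" router process that showed the high memory usage
uwsgi --http :8080 --wsgi-file app.py

# --http-socket makes the workers speak HTTP directly (no router process);
# per the uWSGI docs this is not a drop-in replacement for --http
uwsgi --http-socket :8080 --wsgi-file app.py
```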