I'm currently running Oncall OSS in a Kubernetes cluster with Helm, and it has come to my attention that the Oncall engine, possibly uWSGI, is using more than 30 GiB of memory per pod, even after I manually reduced the process count from 5 to 1.
When I launched the Django app myself with `python manage.py runserver`, memory usage dropped to about 1 GiB.
Oncall version is 1.1.36.
I suspect the issue is on the uWSGI side?
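To check whether an oversized file-descriptor limit inside the container is what uWSGI is sizing its allocations against, the current limits can be inspected with Python's standard library. This is a diagnostic sketch, not part of Oncall itself:

```python
import resource

# Query the soft and hard RLIMIT_NOFILE limits for the current process.
# uWSGI sizes some internal per-worker structures proportionally to the
# maximum number of file descriptors, so an enormous inherited hard
# limit translates into enormous memory allocations.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft nofile limit: {soft}")
print(f"hard nofile limit: {hard}")
```

Running this inside the Oncall engine pod (e.g. via `kubectl exec`) should show whether the limit there is in the billions rather than the usual few thousand.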
Since 1073741816 is the default `RLIMIT_NOFILE` value for containerd, this is not only a Helm-specific issue; it also affects every Docker deployment that runs uWSGI (tested with Docker Compose). The fix would be migrating to gunicorn (or anything else) or hardcoding `max-fd` in `uwsgi.ini`.
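As a sketch of the workaround mentioned above, the descriptor limit can be capped directly in `uwsgi.ini`; the value 1024 is an illustrative placeholder, not a recommendation:

```ini
[uwsgi]
; Cap the number of file descriptors uWSGI sizes its internal
; structures against, instead of inheriting the container's huge
; hard limit. 1024 is a placeholder; pick a value that fits your
; workload.
max-fd = 1024
```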
Can anyone look into this issue? I'm currently editing the oncall-engine deployment by hand to add `max-fd` to `uwsgi.ini` on every new version release. Maybe @vadimkerr?
# What this PR does
Fixes #1521 (see unbit/uwsgi#2299)
## Checklist
- [x] Unit, integration, and e2e (if applicable) tests updated
- [x] Documentation added (or `pr:no public docs` PR label added if not
required)
- [x] `CHANGELOG.md` updated (or `pr:no changelog` PR label added if not
required)