-
Why are you concerned about worker restarts?
-
Memory leak after 100 requests?? That is a very low limit. Is it a good idea to separate metrics from the app: it depends. With higher traffic I would definitely go for it; then if something is wrong with the app, the metrics are still alive, and vice versa. And be aware it's a very bad idea to reset processes every 100 requests when using prometheus-client: every newly created process writes to a new file, so you quickly end up with 1,000,000 metric files in the multiprocess dir and a single scrape takes 10s :P. I strongly recommend following the discussions in their issues, starting with prometheus/client_python#275; there are many more from past years. A much better idea is to kill processes when they reach a memory limit. At Samsung (SGG), where I work, one of our projects uses in-memory SQLites and some other YDD stuff that grows memory indefinitely, and it becomes a problem after some time. Of course it's all to be rewritten some day, but for now uwsgi, for example, gives you both request-count-based and memory-based (RSS) process recycling: https://uwsgi-docs-additions.readthedocs.io/en/latest/Options.html#process-management
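On the file pile-up point: prometheus-client documents a gunicorn hook that marks a dead worker's metric files for cleanup, which helps keep the multiprocess dir bounded when workers are recycled. A minimal sketch of a gunicorn config file (gunicorn configs are plain Python); note this only cleans up a dead worker's live-gauge files, so counter files can still accumulate across restarts, which is the core of the linked issue:

```python
# gunicorn.conf.py
# Assumes multiprocess mode is enabled, i.e. PROMETHEUS_MULTIPROC_DIR
# (prometheus_multiproc_dir on older prometheus-client versions) is set.
from prometheus_client import multiprocess

def child_exit(server, worker):
    # gunicorn calls this server hook when a worker exits; it lets
    # prometheus-client clean up that worker's per-process metric files.
    multiprocess.mark_process_dead(worker.pid)
```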
-
I am currently using the sub application mount to have both the business service application and a separate Prometheus endpoint.
I would really like to run the Prometheus endpoint on a separate port and give it its own worker pool.
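For context, a minimal sketch of the mount being described, assuming a FastAPI app and prometheus-client's ASGI app (the post doesn't show its code, so the endpoint names here are illustrative):

```python
from fastapi import FastAPI
from prometheus_client import make_asgi_app

app = FastAPI()

@app.get("/health")
def health():
    # Stand-in for the real business endpoints.
    return {"status": "ok"}

# The Prometheus endpoint is mounted as a sub application,
# so it runs in the same worker processes on the same port.
app.mount("/metrics", make_asgi_app())
```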
When using gunicorn with the uvicorn.workers.UvicornWorker class, we can set max_requests to a number greater than 0 so that each worker process is restarted after serving that many requests. This helps limit the impact of any memory leak. Since I am using the sub application mount, both apps run as part of the same worker process(es) and share the same port, so every time Prometheus scrapes the metrics endpoint, that request also counts against max_requests.
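A sketch of that gunicorn setup (the values mirror the hypothetical numbers in this post, not recommendations):

```python
# gunicorn.conf.py
worker_class = "uvicorn.workers.UvicornWorker"
workers = 4                # arbitrary worker count for illustration
max_requests = 100         # recycle a worker after 100 requests
max_requests_jitter = 10   # stagger restarts so workers don't all recycle at once
```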
For example, if max_requests is set to 100 and the Prometheus scraper runs every 10 seconds, then the scrapes alone restart the worker process every 1000 seconds. If the Prometheus sub application could run on a different port, I could set max_requests=0 for it, so scrapes would never restart the worker serving the Prometheus app. Has anyone in the community tried something like this?
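One way to get metrics onto their own port without a second gunicorn instance is prometheus-client's built-in server; a minimal sketch (port 9000 is an arbitrary choice). Caveat: with multiple gunicorn workers, each worker would try to bind the same port, so this fits a single-process setup; otherwise people typically run the metrics endpoint as a separate process combined with multiprocess mode:

```python
from prometheus_client import start_http_server

# Serve /metrics from a dedicated HTTP server thread on its own port,
# so Prometheus scrapes never pass through the main app's workers
# and never count against gunicorn's max_requests.
start_http_server(9000)
```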