While using copy-on-write to start up a bunch of WSGI workers, the workers use a ton of CPU and memory whenever I need to restart the main process. This causes OOM errors, and I'd like to avoid it. I've tested with both uwsgi and gunicorn and see the same behavior, so there may be a common cause here.
I have a really simple Flask app that you can use to test this:
from flask import Flask

application = Flask(__name__)

my_data = {"data{0}".format(i): "value{0}".format(i) for i in range(2000000)}

@application.route("/")
def index():
    return "I have {0} data items totalling {1} characters".format(
        len(my_data), sum(len(k) + len(v) for k, v in my_data.items()))
You can start the app with either of the following commands:
When I press ^C on the main process in my terminal and watch the free KiB Mem reported by top, that's when I see the huge drop in available memory and the spike in CPU usage. Note that there is no change in the memory usage reported for each individual worker. Is there a way to safely restart uwsgi so that this memory and CPU spike doesn't happen?
Steps to reproduce:
Set up app.py as described above
Run either gunicorn or uwsgi with the arguments provided above.
Observe free memory and CPU usage (using top)
5.7GB free on my machine before startup
5.3GB free on my machine after startup
Ctrl-C on the main gunicorn/uwsgi process
1.3GB free while processes are shutting down (and CPU usage spikes)
5.7GB free after all processes actually shut down (2-5 seconds later)
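For scale, the test app's dict plausibly accounts for most of the copied memory. A rough footprint sketch, assuming the app above (scaled down 10x so it runs quickly; `sys.getsizeof` undercounts shared internals, so treat the numbers as a lower bound, not a measurement from the issue):

```python
import sys

# Scaled-down version of the app's dict (the issue uses range(2000000)).
N = 200000
my_data = {"data{0}".format(i): "value{0}".format(i) for i in range(N)}

# Rough footprint: the dict's own hash table plus every key and value object.
total = sys.getsizeof(my_data)
total += sum(sys.getsizeof(k) + sys.getsizeof(v) for k, v in my_data.items())

per_entry = total / N
print("approx {0:.0f} bytes per entry, ~{1:.0f} MiB scaled to 2M entries"
      .format(per_entry, per_entry * 2000000 / 2**20))
```

A few hundred bytes per entry across 2M entries puts the dict in the hundreds-of-MiB range, which is consistent with the ~0.4GB drop observed at startup; multiplied across several workers that each dirty their copy at shutdown, a multi-GB spike is plausible.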
It is pretty hard to give an answer, as this depends on various factors.
My bet is that when you shut down the workers, the PyObject C structure of each Python object is 'touched', forcing the 'copy' part of the COW behaviour.
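The refcount point can be seen directly from Python: even a "read-only" traversal writes to each object's refcount field, which lives in the PyObject header, so in a forked worker the kernel must copy the page holding that object. A minimal sketch (the dict here just mimics the app's data; names are illustrative):

```python
import sys

data = {"data{0}".format(i): "value{0}".format(i) for i in range(1000)}

key = next(iter(data))
before = sys.getrefcount(key)

# A read-only traversal still takes a reference to every key; CPython
# writes the new count into each object's ob_refcnt field, dirtying the
# memory page that holds the PyObject header.
keys = list(data)

after = sys.getrefcount(key)
assert after == before + 1   # the extra reference is held by `keys`
```

At interpreter shutdown, teardown walks and decrefs essentially every live object, so nearly every page holding a PyObject header gets dirtied and copied at once in each worker.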
You should check whether this is the true cause by adding the --skip-atexit-teardown option, which skips the Py_Finalize() call.
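For reference, the option can be passed on the command line or set in an ini file; a minimal sketch (the module path and process count are illustrative, not taken from the issue):

```ini
[uwsgi]
; load the Flask app object "application" from app.py
module = app:application
master = true
processes = 4
; skip Py_Finalize() at shutdown so object teardown never
; touches (and copies) the shared copy-on-write pages
skip-atexit-teardown = true
```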
Aha! When I add --skip-atexit-teardown, uwsgi appears to shut down immediately with no memory spike. Is this a "safe" way to do a shutdown? We don't have any side-effects that I'm worried about but I would like to minimize failed requests when we do a restart.
For more details on reproducing the issue, see:
https://stackoverflow.com/questions/61130651/memory-available-free-plummets-and-cpu-spikes-when-shutting-down-uwsgi-gunicor