Spike in the memory usage - Heroku #65
Memory usage of a worker is mostly dominated by Django and the app itself, though 500MB is quite a lot. Some questions:
You're on old versions of channels and daphne; I'd at least try upgrading them first, as some memory leaks were fixed a while ago.
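The upgrade suggested above is a straightforward pip operation; a sketch (package names are the ones discussed in this thread, pin exact versions in requirements.txt as appropriate):

```shell
# Upgrade channels and daphne to pick up the memory-leak fixes
# mentioned above, then confirm what got installed
pip install --upgrade channels daphne
pip show channels daphne
```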
OK, will do that, and if the problem is still there I'll update you. But even otherwise the memory footprint is too high.
I agree the footprint is too high. I want to make sure we can isolate it to the most recent release of channels, and that it's not there when you run Django in normal WSGI mode; otherwise it's not a problem I can solve!
Closing due to lack of response.
"Deploying Python Applications with Gunicorn" fixed this problem for me: `pip install gunicorn` (or add it manually to your requirements.txt file). Next, revise your application's Procfile to use Gunicorn. Here's an example Procfile.
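A Procfile along the lines of Heroku's Gunicorn guide might look like this (`myproject` is a placeholder for the app's actual WSGI module, not a name from this thread):

```
web: gunicorn myproject.wsgi --log-file -
```

Note that Gunicorn only speaks WSGI, so this sidesteps Daphne rather than fixing it: any WebSocket routing handled by channels would no longer be served.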
Our initial configuration: 3 Daphne dynos and 6 workers with 1/2 GB of memory each.
![screen shot 2017-01-07 at 3 39 03 pm](https://cloud.githubusercontent.com/assets/2411744/21740976/9cc9f9b6-d4ef-11e6-87b9-8941df3e08cc.png)
We saw a sudden spike in memory usage (198%) and had to increase the memory size to 1 GB. We're not able to understand why the workers needed more than 1/2 GB of memory.
What factors should we consider when deciding how much memory each worker needs?
I saw an article on Heroku about Gunicorn: https://devcenter.heroku.com/articles/optimizing-dyno-usage#basic-methodology-for-optimizing-memory
Couldn't find the same for Daphne.
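Heroku's guidance for Gunicorn boils down to measuring each process's resident memory and sizing dynos from that; the same approach works for any worker process, Daphne included. A minimal sketch (the function name is ours, not part of Daphne or channels) that reports the current process's peak RSS so it can be logged periodically from inside a worker:

```python
import resource
import sys

def peak_rss_mb():
    """Return this process's peak resident set size in megabytes.

    ru_maxrss is reported in kilobytes on Linux but in bytes on macOS.
    """
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        return peak / (1024 * 1024)
    return peak / 1024

if __name__ == "__main__":
    # Log this number over time to see whether memory grows steadily
    # (a leak) or jumps with specific requests (large payloads/querysets)
    print(f"peak RSS: {peak_rss_mb():.1f} MB")
```

Multiplying the steady-state per-worker figure by the number of workers gives a floor for the dyno size, with headroom for spikes.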