Spike in the memory usage - Heroku #65

Closed
devenbhooshan opened this issue Jan 7, 2017 · 7 comments

@devenbhooshan

Our initial configuration: 3 daphne dynos and 6 workers with 1/2 GB of memory each.

[Screenshot attached: screen shot 2017-01-07 at 3 39 03 pm]

We saw a sudden spike in memory usage (198%) and had to increase the memory size to 1 GB.

We can't understand why the workers needed more than 1/2 GB of memory each.
What factors should we consider when deciding how much memory each worker needs?

I saw an article on Heroku about Gunicorn: https://devcenter.heroku.com/articles/optimizing-dyno-usage#basic-methodology-for-optimizing-memory

Couldn't find the same for daphne.
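That article's basic methodology is essentially to enable runtime metrics and watch per-dyno memory over time, which should apply to daphne and worker dynos just as well as to Gunicorn. A minimal sketch, assuming the Heroku CLI and a hypothetical app name my-app:

heroku labs:enable log-runtime-metrics -a my-app
heroku restart -a my-app
heroku logs --tail -a my-app | grep memory_total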

@andrewgodwin
Member

Memory usage of a worker is mostly dominated by Django and the app itself, though 500MB is quite a lot. Some questions:

  • What version of Channels, Daphne, Django and Python are you using?
  • Does the same thing happen with a normal WSGI process with, say, gunicorn with similar code?
  • What sort of throughput are the workers getting?
  • What channel layer are you using?

@devenbhooshan
Author

  • What version of Channels, Daphne, Django and Python are you using?
    channels==0.14.0
    daphne==0.12.1
    Django==1.9.5
    Python: 2.7.1
  • Does the same thing happen with a normal WSGI process with, say, gunicorn with similar code?
    Couldn't say for sure; I will try to reproduce the same load with gunicorn and let you know (see the sketch below this list).
  • What sort of throughput are the workers getting?
    Throughput was around 200 requests per minute, and I can assure you it was not that much.
  • What channel layer are you using?
    Redis.
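For the WSGI comparison, a rough sketch of how the same code could be run under gunicorn (myproject is a placeholder for the actual Django project module):

pip install gunicorn
gunicorn myproject.wsgi --workers 3 --log-file -

Driving the same ~200 requests per minute against this and comparing per-process memory should show whether the footprint is specific to channels/daphne.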

@andrewgodwin
Member

You're on old versions of channels and daphne; I'd at least try upgrading them first, as some memory leaks were fixed a while ago.
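A sketch of the upgrade (the exact versions to pin depend on the rest of the app; this just pulls the latest releases and records them):

pip install --upgrade channels daphne
pip freeze > requirements.txt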

@devenbhooshan
Author

devenbhooshan commented Jan 9, 2017 via email

@andrewgodwin
Member

I agree the footprint is too high. I want to make sure we can isolate it to the most recent release of channels, and confirm that it's not there when you run Django in normal WSGI mode; otherwise it's not a problem I can solve!

@andrewgodwin
Member

Closing due to lack of response.

@UbuntuEvangelist

Deploying the Python application with Gunicorn fixed this problem for me.

pip install gunicorn
pip freeze > requirements.txt

Or, you can manually add gunicorn to your requirements.txt file.

Next, revise your application's Procfile to use Gunicorn. Here's an example Procfile:
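A minimal sketch of such a Procfile, where myproject is a placeholder for the actual Django project module:

web: gunicorn myproject.wsgi --log-file -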
