Unbounded memory consumption with redis asgi #124
Comments
Note that the redis memory increase starts out slowly but seems to grow exponentially. Also note that each siege worker should fully consume the response before making a new request, which means something is piling up memory for connections that are no longer alive. I noticed that it uses a
Well #65 never had a resolution and was apparently fixed by upgrading versions, so I doubt it's that. There's meant to be channel capacity on the response channels precisely to prevent this kind of thing; could you re-run the test with your channel capacity for response channels set to 1? (You can do this with the setting
Where exactly should this option go? I added it to the redis channel layer's dict but that seemed to have no effect. I can't find anything about a
"cannel_capacity": {...}, in the |
I found settings in an example project and copied them, and they seemed to have an effect (siege with

```python
CHANNEL_LAYERS = {
    'redis': {
        'BACKEND': 'asgi_redis.RedisChannelLayer',
        'ROUTING': 'memorytest.urls.channel_routing',
        'CONFIG': {
            'channel_capacity': {
                'daphne.response*': 1,
                'http.request': 1,
                'http.response*': 1,
                'http.disconnect': 1,
            },
        }
    }
}
```
Weird. Probably a bug in the redis layer then; if you could do your own investigation as to what keys in Redis are piling up, that would help a lot, as I won't have time to do so for a couple of weeks.
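(For anyone reproducing this, one rough way to check which keys are growing, assuming asgi_redis's default `asgi:` key prefix, is to poke at Redis directly:)

```sh
# Hedged investigation commands; "asgi:" is asgi_redis's default key prefix
# and may differ if a custom prefix is configured.
redis-cli --bigkeys                  # sample the largest keys by type
redis-cli keys 'asgi:*' | wc -l      # count channel-layer keys piling up
redis-cli info memory                # overall memory use over time
```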
It looks like the response body is piling up:
If I enter, for example,
Interestingly,
I just noticed that the
Looking through the code, it looks like it fails to discard messages after they are read, instead leaving them to the auto-expiry (which, at its default of 60 seconds, is not enough). I must have messed this up in a refactor somewhere; let me get a patch in so you can test it.
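(To make the "discard after read" point concrete, here is a rough sketch of the pattern being described, not asgi_redis's actual code: each message body lives in its own Redis key and the channel is a list of those keys, so a body that is only left to its TTL instead of being deleted once read lingers for the full 60 seconds.)

```python
# Rough sketch only: per-message body keys, with the channel as a Redis
# list of those keys. The fix described above is to delete the body key
# as soon as it has been read instead of waiting for its expiry.
import redis

r = redis.StrictRedis()

def receive(channel):
    message_key = r.lpop(channel)   # pop the next message key off the channel list
    if message_key is None:
        return None
    body = r.get(message_key)       # read the message body
    r.delete(message_key)           # discard it now rather than leaving it to the
                                    # 60-second auto-expiry mentioned above
    return body
```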
Moving this over to django/channels_redis#48
We've noticed our server running out of memory due to concurrent incoming requests outstripping Daphne's ability to keep up with response writing. This is probably related to issue #65.
This can be tested relatively easily by initializing a new Django project and then replacing `urls.py` with this example:
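(The original example was not preserved in this thread; the sketch below is an assumed stand-in: a plain Django view returning a large response body, with an empty `channel_routing` since Channels 1.x routes `http.request` to the view system by default. View name and response size are made up.)

```python
# urls.py -- a minimal sketch, not the original example from this report.
from django.conf.urls import url
from django.http import HttpResponse


def big_response(request):
    # ~1 MB of data per request; increase this to make the effect stronger.
    return HttpResponse(b"x" * (1024 * 1024))


urlpatterns = [
    url(r'^$', big_response),
]

# Channels 1.x routes http.request to the Django view layer by default,
# so no explicit routes are needed for this test.
channel_routing = []
```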
Add to `settings.py`:
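(Again, the original snippet is missing; this is a minimal sketch of the channel layer settings, assuming the `asgi_redis` backend and the `ROUTING` path quoted earlier in the thread. The `default` alias and the Redis host are assumptions.)

```python
# settings.py -- assumed channel layer configuration for the test project.
CHANNEL_LAYERS = {
    'default': {
        'BACKEND': 'asgi_redis.RedisChannelLayer',
        'ROUTING': 'memorytest.urls.channel_routing',
        'CONFIG': {
            'hosts': ['redis://127.0.0.1:6379'],
        },
    },
}
```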
Then spin up redis, daphne and one worker, and hit it with, for example, siege:
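(The exact commands weren't preserved; something along these lines matches the description of one Daphne instance, one worker, and 100 concurrent siege clients. Ports and the project name are assumptions.)

```sh
# Assumed invocations for the described setup; adjust project name and ports.
redis-server &                                                 # channel layer backend
daphne -b 127.0.0.1 -p 8000 memorytest.asgi:channel_layer &    # interface server
python manage.py runworker &                                   # one Channels 1.x worker
siege -c 100 http://127.0.0.1:8000/                            # 100 concurrent clients
```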
Now, watch `redis-server` eat up more and more memory. In some cases (but not all?), `daphne` also starts eating up more and more memory.

This is with 100 concurrent clients. Either adding more clients or increasing the response size makes the memory use grow faster.
I don't really understand what is going wrong here. It looks like the workers are producing data faster than Daphne is able to respond.
Of course, this is a very synthetic test. In reality I use `django.http.FileResponse` to stream a file of 30MB or so to tens of clients, and this is causing both Daphne and Redis to use 1GB of memory before the OOM killer gets invoked.
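(For reference, the real-world case described above boils down to a view along these lines; the path and view name are hypothetical.)

```python
# Hypothetical view matching the description above: streaming a ~30 MB
# file through Django's FileResponse.
from django.http import FileResponse

def download(request):
    return FileResponse(open('/srv/files/big-file.bin', 'rb'))
```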