Huge memory usage by redis when used as celery broker #2205

Closed

omerra opened this issue Aug 20, 2014 · 4 comments

omerra commented Aug 20, 2014

We are running Celery 3.1.8 and Kombu 3.0.20 with Redis as the broker backend. Celery is run on Heroku under New Relic monitoring.

Our config is pretty simple and we don't care about task results, so we set:

CELERY_IGNORE_RESULT = True

However, we noticed that our Redis broker instance uses a huge amount of memory (~3 GB), while the actual number of queues and messages in them is quite small at any given moment. After examining the RDB file with a memory-analysis tool, we found a few huge lists:

database,type,key,size_in_bytes,encoding,num_elements,len_largest_element
0,list,"ed2d26b5-b8fb-3478-ace7-6714e8a7b4ed.reply.celery.pidbox",70733024,linkedlist,4,17696745
0,list,"e3eda502-27eb-348d-a86e-1d78fc31b165.reply.celery.pidbox",35350876,linkedlist,2,17686857
0,list,"6421b8c1-bbc0-3a59-a7ee-f26450552a60.reply.celery.pidbox",35443327,linkedlist,2,17762611
0,list,"8439dcd0-921e-3922-8504-9057b6c9834a.reply.celery.pidbox",106088780,linkedlist,6,17696745
0,list,"85c667c8-63b6-338f-b00a-e1f2cd4da143.reply.celery.pidbox",17762845,linkedlist,1,17762611
0,list,"05d0c0f2-9530-37f3-a9cb-189fc237303c.reply.celery.pidbox",106088769,linkedlist,6,17696743
0,list,"d3f200fd-c81e-3d6d-acf3-d0e9021e7e5c.reply.celery.pidbox",35431615,linkedlist,2,17762611
0,list,"7b4291c7-b916-3806-910b-c250c9a7fece.reply.celery.pidbox",88401866,linkedlist,5,17696745
0,list,"0c8b64c1-7efe-3070-b2e3-980f395b84e8.reply.celery.pidbox",123752294,linkedlist,7,17696743
0,list,"e5cf288b-8ced-3f6c-891e-34e2d302c89c.reply.celery.pidbox",70711492,linkedlist,4,17691717
0,list,"a9cafe29-204d-3d97-9b7b-322a847d0789.reply.celery.pidbox",53121613,linkedlist,3,17762611
0,list,"1c1f90ca-1fe1-35e4-a144-3a97177a674b.reply.celery.pidbox",35431683,linkedlist,2,17762611

These lists each contain a few JSON items, each with a huge body.

Any pointers as to why these reply pidboxes are present even though we set Celery to ignore results? It doesn't seem like expected behavior. We would like to get rid of them, since they turn an instance that should use ~10 MB of memory into one that uses ~3 GB.

Any help would be great; let me know if you need more info on the configuration.
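
The report above looks like the output of a redis-rdb-tools memory report. As a minimal sketch (assuming redis-py and a broker at redis://localhost:6379/0; adjust the URL for your environment), keys like these can also be located, sized, and, if necessary, cleared by hand:

# Sketch: list and optionally clear stale pidbox reply lists.
# Assumes redis-py is installed; the connection URL is a placeholder.
import redis

r = redis.StrictRedis.from_url("redis://localhost:6379/0")

for key in r.scan_iter(match="*.reply.celery.pidbox"):
    name = key.decode() if isinstance(key, bytes) else key
    print("%s -> %d message(s)" % (name, r.llen(key)))
    # Uncomment to drop the stale replies:
    # r.delete(key)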

ask (Contributor) commented Aug 20, 2014

They are remote control replies, so you or something you use must be sending remote control requests.
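
For context, these replies are generated by broadcast/remote-control commands such as inspect or status. A sketch of the kind of calls that produce *.reply.celery.pidbox traffic ('proj' is a placeholder project name):

# Examples of remote control requests; each one makes the workers push a
# reply onto a <uuid>.reply.celery.pidbox list on the broker.
# 'proj.celery' is an assumed module path for your Celery app.
from proj.celery import app

insp = app.control.inspect(timeout=1)
print(insp.active())       # broadcast "active" request
print(insp.registered())   # broadcast "registered" request

# Command-line equivalents:
#   celery -A proj inspect active
#   celery -A proj status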

omerra (Author) commented Aug 20, 2014

Hmm, do you have any pointers as to what could be sending these remote control requests? Is there a way to trace it back? We aren't using any tool like Flower and are not monitoring with celeryev. Could this be due to running with New Relic?

Thanks for the help
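
One way to narrow this down (a sketch, assuming redis-py and reusing a key name from the report above) is to peek at the queued reply payloads; their headers and properties usually hint at which request was answered. redis-cli MONITOR can also show which client is publishing to reply.celery.pidbox.

# Sketch: inspect the first few entries of one oversized reply list.
# Key name taken from the earlier report; connection URL is a placeholder.
import json
import redis

r = redis.StrictRedis.from_url("redis://localhost:6379/0")
key = "ed2d26b5-b8fb-3478-ace7-6714e8a7b4ed.reply.celery.pidbox"

for raw in r.lrange(key, 0, 2):   # only the first few; they are ~17 MB each
    msg = json.loads(raw)
    # Kombu's Redis transport stores the message envelope as JSON; the headers
    # and properties (routing key, reply_to, etc.) usually identify the request.
    print(msg.get("headers"))
    print(msg.get("properties"))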

thedrow (Member) commented Jun 19, 2015

@omerra Try to debug this with Flower and see what happens.
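
Roughly (Flower is a separate install; the broker URL and project name below are placeholders):

pip install flower
celery -A proj flower --broker=redis://localhost:6379/0 --port=5555

Note that Flower itself issues inspect/broadcast requests to the workers, so some extra reply.celery.pidbox traffic while it runs is expected; the point is to correlate when the oversized replies appear with what is being requested.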

ask (Contributor) commented Jun 23, 2016

Closing this, as we don't have the resources to complete this task.

It may have been fixed in master, so we'll see if it comes back after the 4.0 release.

ask closed this as completed Jun 23, 2016