Delay memory leak #3279
I have a Celery task: … And I got a leak of about 650 MB.
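(A minimal sketch of the pattern being described, assuming a Redis broker and a trivial task; neither detail is from the report, and the original snippet is not recoverable:)

```python
# Hypothetical sketch, NOT the reporter's original code.
# Broker URL and task body are assumptions for illustration.
from celery import Celery

app = Celery('repro', broker='redis://localhost:6379/0')

@app.task
def noop(x):
    return x

if __name__ == '__main__':
    # The report describes ~650 MB of growth in the publishing
    # process after roughly 30,000 .delay() calls like these.
    for i in range(30000):
        noop.delay(i)
```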
Comments

What version?
kombu==3.0.34
Probably you have to wait for gc.collect() first?
But why 650 MB after 30,000 calls of delay()?
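(One way to check whether the growth is just uncollected garbage is a stdlib-only measurement along these lines; the 30,000 figure is from the thread, `noop` is the hypothetical task from the sketch above, everything else is illustrative:)

```python
import gc
import tracemalloc

tracemalloc.start()

for i in range(30000):
    noop.delay(i)  # hypothetical task from the sketch above

gc.collect()  # collect reference cycles before measuring
current, peak = tracemalloc.get_traced_memory()
print(f'live: {current / 2**20:.0f} MB, peak: {peak / 2**20:.0f} MB')
```

If `live` stays high after `gc.collect()`, the memory is held by reachable objects rather than by pending garbage.)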
I don't know if it's related, but I have been having some pretty nasty memory leakage in 3.1.23 with the Redis backend, if that helps.
Have been experiencing something similar. We are using SQS and sending tasks to the queue. What we were seeing is basically memory gradually increasing, then a huge drop after some random time, or after the task got killed because no more memory was available.
Just wanted to follow up on this. We've managed to solve our problem: we hadn't increased CPU on our instance (:facepalm:), and we've also made some changes to the worker so that it processes work in several chunks (see the sketch below).
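(For the chunking part, one option is Celery's built-in chunks primitive; a rough sketch where `process_item` and the broker URL are assumptions, not details from this thread:)

```python
from celery import Celery

app = Celery('app', broker='redis://localhost:6379/0')

@app.task
def process_item(item_id):
    # Hypothetical per-item work; stands in for whatever the worker does.
    return item_id

# Instead of one task iterating over everything (and accumulating state
# in a single long-lived process), dispatch 100 items per subtask:
process_item.chunks([(i,) for i in range(10000)], 100).apply_async()
```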
@yeago @ask @ztsv I've been experiencing similar issues with a Django app (deployed via Heroku) with a Redis backend. I have a task that executes at a rate of approximately 1,000 tasks per minute. I've resorted to restarting this particular worker queue every 3 hours to prevent exceeding memory quotas. Do any of you have suggestions for debugging this problem? It's persisted for a while, but I'm getting to the point where restarting the worker isn't going to be feasible. Any help is greatly appreciated!
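(One stdlib-only way to debug this kind of growth is to diff live-object counts by type inside the worker; a sketch, with nothing here taken from the thread:)

```python
import gc
from collections import Counter

def type_counts():
    # Snapshot of live objects, counted by type name.
    return Counter(type(o).__name__ for o in gc.get_objects())

before = type_counts()
# ... let the worker process a few hundred tasks here ...
growth = type_counts() - before
for name, n in growth.most_common(10):
    print(f'{name}: +{n}')
```

Whatever type keeps climbing between snapshots is usually a good lead.)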
Note that I've tried using …
Ref #3339
@vesterbaek Still haven't resolved this on my end. I'm planning on upgrading to Celery 4 sometime in the next few months to see if that fixes it. For now I've resigned myself to a cron job that restarts my Celery workers every 3 hours to reset memory consumption.
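(As a possibly lighter-weight alternative to an external cron, Celery can recycle its own worker processes; a sketch assuming Celery 4's lowercase setting names:)

```python
# celeryconfig.py
# Replace each worker process after it has run this many tasks...
worker_max_tasks_per_child = 1000
# ...or once its resident memory exceeds this limit (in kilobytes).
worker_max_memory_per_child = 300000  # ~300 MB
```

This doesn't fix a leak, but it bounds how much memory one process can hold between recycles.)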
OK, thanks for getting back to me. I did upgrade to Celery 4, unfortunately with no improvement in this area.
Just noticed this today, on 4.1.0 with RabbitMQ. Running a bunch of jobs and ran out of (2 TB of) memory in a few hours...
@danqing @vesterbaek Do you have a test case that can reproduce this issue?
@thedrow Unfortunately not. I've been trying to provoke this in development, but have not found a good way to achieve it. In production, when it happens, it's hard for me to investigate because the app is running on Heroku. For now, I'm mitigating the problem by monitoring Heroku logs for R14 (out-of-memory) errors and restarting the workers when that happens.
You could probably try the latest master and check issue #3339.
My colleagues at work were hitting this problem and solved it by switching to rq.