Kombu 4.1.0 - Memory usage increase (leak?) on a worker when using kombu queues #844
Comments
Could you try the 4.2 version from master and check if anything improves?
Same behavior with 4.2 from master as well. Actually, it seems worse: memory piles up rather quickly. Am I missing anything in the code above to trigger the release of memory from objects, connections, queues, etc.?
Ask your question on the mailing list/IRC, referencing this issue.
Addressing this issue is quite important to me; it effectively brings my use case down. Switching away from Kombu to something else would take considerable time, so I don't seem to have many options other than fixing this. Is there any way I can get the community's attention on this issue? Any help in addressing it will be greatly appreciated.
Could you try RabbitMQ? That might be a temporary solution.
I've tried the same with RabbitMQ as the backend and saw the same behavior in that scenario too, so the issue may not be in the backend-specific implementation.
Try to track down the leak. I would also suggest installing all of Celery's dependencies from the master branch to verify against master.
I am experiencing the same issue. It disappears when you don't use a timeout and instead block indefinitely.
@auvipy Nice find. It seems like this could definitely be related to: celery/celery#4843 (comment)
This may be fixed by: #1476
OK, merged; let's see.
Hi,
I have implemented a worker using Kombu's SimpleQueue; the implementation is given below. When I run this worker for a few hours on an Ubuntu 16.04 system with Redis as the backend, I notice a gradual memory build-up in the process. When I run it for over a day, it ends up consuming all memory on the system, and the system becomes unusable until the worker is killed.
On the Redis server, I have timeout set to 5 seconds and tcp-keepalive set to 60 seconds.
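For reference, the corresponding redis.conf directives as described would be:

```
timeout 5          # close idle client connections after 5 seconds
tcp-keepalive 60   # send TCP keepalives every 60 seconds
```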
Worker Code:
Here's a plot of free memory on the system:
What is going wrong here? Did I miss anything in the implementation?
Any help here will be greatly appreciated.