Tasks stuck in ready queue #99
You need to set up a cleaner to return unacked deliveries of stale connections back to the ready list. See https://github.com/adjust/rmq#cleaner
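For reference, the README's cleaner runs in a loop rather than once. A minimal sketch, assuming the rmq v4 module path and API (where Clean returns the number of returned deliveries along with an error; older versions differ):

```go
package main

import (
	"log"
	"time"

	"github.com/adjust/rmq/v4"
)

func main() {
	// Open a dedicated connection for the cleaner.
	connection, err := rmq.OpenConnection("cleaner", "tcp", "localhost:6379", 2, nil)
	if err != nil {
		log.Fatal(err)
	}

	cleaner := rmq.NewCleaner(connection)

	// Run periodically so unacked deliveries of stale connections
	// (e.g. killed pods) keep getting returned to the ready list.
	for range time.Tick(time.Minute) {
		returned, err := cleaner.Clean()
		if err != nil {
			log.Printf("failed to clean: %s", err)
			continue
		}
		if returned > 0 {
			log.Printf("cleaner returned %d deliveries", returned)
		}
	}
}
```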
I believe it is set up correctly; we have it set up this same way on all our services, and the others behave fine.
I'm not sure if this is the proper way of calling the cleaner, or whether we have some other issue. But also, why would
Could it be that many of your old pods are still running, so that those connections would still be active? Also, could you start a queue handler and show me the overview? See https://github.com/adjust/rmq/blob/master/example/handler/main.go
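For reference, an overview handler along the lines of the linked example could look roughly like this. This is a sketch assuming the rmq v4 API, where GetOpenQueues and CollectStats also return errors; older versions return the values directly:

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/adjust/rmq/v4"
)

type overviewHandler struct {
	connection rmq.Connection
}

func (h overviewHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// Collect stats for all queues this connection knows about.
	queues, err := h.connection.GetOpenQueues()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	stats, err := h.connection.CollectStats(queues)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprint(w, stats.GetHtml(r.FormValue("layout"), r.FormValue("refresh")))
}

func main() {
	connection, err := rmq.OpenConnection("handler", "tcp", "localhost:6379", 2, nil)
	if err != nil {
		log.Fatal(err)
	}
	http.Handle("/overview", overviewHandler{connection})
	log.Println("overview on http://localhost:3333/overview")
	log.Fatal(http.ListenAndServe(":3333", nil))
}
```

The overview lists ready, rejected and unacked counts per queue and per connection, which makes stale connections easier to spot.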
@pdamir: Did you get it solved?
Our main issue was that our calls to the Elasticsearch update method were failing, which caused the tasks to be rejected; we've managed to fix that and there are no more issues. We did notice a single older queue with 19 ready items which never get consumed, but when we add them to the queue, a new one is created and they are consumed there. We have just removed the older queue key from Redis manually.
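For anyone running into the same symptom, the reject path described above looks roughly like this in a consumer. A sketch only: updateElasticsearch is a hypothetical placeholder, and Ack/Reject return errors only in recent rmq versions (booleans in older ones):

```go
package consumers

import (
	"log"

	"github.com/adjust/rmq/v4"
)

// updateElasticsearch is a hypothetical stand-in for the failing ES update call.
func updateElasticsearch(payload string) error { return nil }

type orderConsumer struct{}

func (orderConsumer) Consume(delivery rmq.Delivery) {
	if err := updateElasticsearch(delivery.Payload()); err != nil {
		// Rejected deliveries are moved to the queue's rejected list and are not
		// retried automatically, so a persistent failure makes them pile up there.
		if rejErr := delivery.Reject(); rejErr != nil {
			log.Printf("failed to reject delivery: %s", rejErr)
		}
		return
	}
	if err := delivery.Ack(); err != nil {
		log.Printf("failed to ack delivery: %s", err)
	}
}
```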
Glad to hear, thanks for the update!
Hello! I have recently started running into problems and honestly I don't know where to look anymore. Our ElastiCache Redis node has been seeing increased memory usage, even reaching 100%, which crashed a few services due to tasks stuck in the unacked/ready state, such as:
found so far 'rmq::connection::automation.order-JpP1pZ::queue::[automation.order]::unacked' with 102513 items
found so far 'rmq::connection::personification.lastviewed-Crt5kY::queue::[personification.lastviewed]::ready' with 415308 items
It seems that every time the k8s pods of the workers that handle these queues are restarted (manually or after a new build), tasks keep piling up on some queues while others are processed normally.
The cleaner is run every time a pod is started: we see its first print statement below, but not the print statement for the error.
I can add more functions from our code if needed, but at this point any advice or pointer is very much appreciated.