rpc backend doesn't seem to report message status correctly. #4084
Comments
I've been able to reproduce this issue with:
but it fails when I execute a group task before individual tasks. Anyway, GroupResult does not report ready even when all subtasks are ready; only when I call GroupResult.get() does it mark the GroupResult as ready.
No matter which queue you define or start via the celery command, it starts failing once a group task is executed.
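A minimal sketch of the reported behaviour (the app setup and the add task are assumed, not taken from the report, and a worker must be running):

```python
from celery import Celery, group

app = Celery('tasks', broker='amqp://guest@localhost//', backend='rpc://')

@app.task
def add(x, y):
    return x + y

res = group(add.s(3, 4), add.s(5, 7)).delay()

# Per the report: stays False even after all subtasks have finished...
print(res.ready())
res.get(timeout=10)
# ...and only flips to True once get() has drained the results.
print(res.ready())
```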
After digging a while, I've discovered that this problem reproduces when using rpc as the results_backend instead of amqp. Maybe the rpc results backend is broken?
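For reference, a hedged sketch of the two configurations being compared (broker URL assumed):

```python
from celery import Celery

# Exhibits the bug described in this thread:
app_rpc = Celery('tasks', broker='amqp://guest@localhost//', backend='rpc://')

# Reported to behave as expected:
app_amqp = Celery('tasks', broker='amqp://guest@localhost//', backend='amqp://')
```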
This sounds like a bug. Thanks for the report.
I've tried against 4.0.2, 4.1.0 and master, and for all of them the rpc backend does not update task status correctly.
I have also observed something similar to this issue, where despite a worker being configured to send a STARTED message (via …), the state change is never reflected on the result. I have traced the issue to this piece of code (in `on_state_change`):

```python
def on_state_change(self, meta, message):
    if self.on_message:
        self.on_message(meta)
    if meta['status'] in states.READY_STATES:  # <===== HERE
        task_id = meta['task_id']
        try:
            result = self._get_pending_result(task_id)
        except KeyError:
            # send to buffer in case we received this result
            # before it was added to _pending_results.
            self._pending_messages.put(task_id, meta)
        else:
            result._maybe_set_cache(meta)
            buckets = self.buckets
            try:
                # remove bucket for this result, since it's fulfilled
                bucket = buckets.pop(result)
            except KeyError:
                pass
            else:
                # send to waiter via bucket
                bucket.append(result)
    sleep(0)
```

I have confirmed that the …
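If that reading is right, non-ready states such as STARTED are handed to the on_message callback but never cached on the AsyncResult, so .state keeps reporting PENDING. A hedged way to observe this (the add task is assumed; on_message is a real keyword argument of AsyncResult.get()):

```python
res = add.apply_async((3, 4))

# Intermediate states (e.g. STARTED, if task_track_started is enabled) are
# only surfaced through the on_message callback; res.state itself stays
# PENDING until a READY state arrives, per the branch marked above.
res.get(on_message=lambda meta: print(meta['status']), timeout=10)
```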
Could you also check the latest master and report back?
Somebody reported this as an issue on IRC.
Another update while I was testing this: apparently it works for single tasks, but the same problem arises when using groups.

```python
>>> res = group(add.s(3, 4), add.s(5, 7)).delay()
>>> for r in res.results:
...     print(r.state)
PENDING
PENDING
```

This might have to be a new issue...
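The workaround observed earlier in the thread applies here too; a hedged illustration:

```python
>>> res.get(timeout=10)   # draining the results updates the cached states
[7, 12]
>>> [r.state for r in res.results]
['SUCCESS', 'SUCCESS']
```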
I'm having a similar issue. I have a group of tasks executing correctly when using the AMQP backend. When switching to RPC, the group task never finishes and blocks on the …
I encountered an issue that might be related, in version 4.3.0. I have a simple Celery application with two tasks, a_func() and b_func(). After starting the celery worker, I call a_func.apply_async(), and a_func, while running on the worker, calls b_func.apply_async(). When using 'amqp://' as the backend everything works well. However, when using 'rpc://' as the backend, I have problems. I am trying to get the state and the return value of the tasks. For the a_func() task there is no problem, but for b_func() I get state = 'PENDING' forever, and get() hangs forever. I am using: celery 4.3.0, rabbitmq 3.5.7 as broker, Python 2.7, Ubuntu 16.04 LTS. Worker cmd: … a_func and b_func tasks: …
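The original snippets weren't captured here; a plausible minimal version consistent with the description (only the task names come from the report, everything else is assumed):

```python
from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//', backend='rpc://')

@app.task
def b_func():
    return 'b'

@app.task
def a_func():
    # a_func launches b_func from inside the worker. Per the report, the
    # original caller can read a_func's state fine, but b_func's result
    # stays PENDING forever under rpc://, likely because its result goes to
    # the reply queue of the process that sent it (the worker), not the
    # caller.
    return b_func.apply_async().id
```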
+1. This is probably due to this issue: #4830, which has a fix in the works.
Just encountered this issue and noticed it is still open. Any news?
It may be a limitation of the RPC backend since it doesn't store state. I think this issue might only be fixed after our NextGen architecture refactoring.
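Until then, the workaround several people in this thread land on is switching the result backend, e.g. to Redis. A minimal sketch (URLs assumed):

```python
from celery import Celery

# The rpc:// backend streams each result to the requesting client's reply
# queue and keeps no server-side state; redis stores task state centrally,
# so any client can query it later.
app = Celery('tasks',
             broker='amqp://guest@localhost//',
             backend='redis://localhost:6379/0')
```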
On my rabbitmq-server the task shows as completed; however, I always get PENDING, and if I call get() my service freezes. The interesting thing is that it works locally, but when my service sends tasks to a rabbitmq on another server I start getting these problems. I'm thinking about replacing the backend.
I faced the same issue today on celery 5.1.2. Everything works fine on my Windows PC, but when I deployed the code to a Linux server the problem appeared; I found that using celery 5.0.0 solves the issue on my Linux server. Hope this information helps you debug the issue.
Then it might be a regression. Can you try 5.2.0rc1 and report back?
I tried with celery 5.2.0rc1 and still faced the same problem.
Thanks for clarifying.
Hi, when a single task is added at a time, it works fine. Any fixes or updates? Thanks in advance.
Hi,
Same problem here. Will try to switch to Redis, but this adds a layer of complexity.
Same issue. Fetching the same taskId (which has already completed) returns a different result.
It seems to be a problem with how Flask is running:
However, if the server is started, the problem occurs.
Also if … is set:
Hey. I've been meaning to switch to the rpc backend for better monitoring support, e.g. with Flower. Is it still not safe to upgrade to 5.x and use the rpc backend? (Currently using the amqp backend, but that's deprecated in 5.x.)
Same problem: task statuses are always PENDING and …
Same issue here using:
This issue seems to have been lurking for years. I went down the RabbitMQ hole since it seemed to be the recommended celery backend at the time, but now it seems the solution is to switch over to another backend like Redis?
The RPC backend (using RabbitMQ) seems to have a bug that hasn't been fixed in 4 years (celery/celery#4084) where task states aren't properly updated. This makes redirecting to the results page difficult, if not impossible. Using Redis, we do not encounter these issues.
This is version 4.0.2
If I use rabbitmq, the rpc result backend, and a custom queue, the message's status never seems to change and stays 'PENDING', even though the logging in the worker reports the task was successfully executed, until I call get() of some sort, at which point the status changes to 'SUCCESS'. When I change the backend to amqp the system works as expected, giving SUCCESS before get(). The redis backend doesn't show this problem either, so it seems rpc-specific.
Note that not setting a custom queue, i.e. using the default, also works as expected, just like the other backends!
I've got in my tasks.py:
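The snippet itself wasn't captured in this copy; a plausible minimal tasks.py matching the description (the add task and the URLs are assumed, the rabbitmq broker and rpc backend come from the report):

```python
from celery import Celery

# Hypothetical minimal tasks.py: rabbitmq broker + rpc result backend.
app = Celery('tasks', broker='amqp://guest@localhost//', backend='rpc://')

@app.task
def add(x, y):
    return x + y
```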
I start this with `celery -A tasks worker --loglevel=info -Q myqueue`
On the other side I do:
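Again the original snippet is missing; a hedged client-side sketch reusing the hypothetical add task above:

```python
from tasks import add

# Send the task to the custom queue the worker was started with (-Q myqueue).
res = add.apply_async((2, 3), queue='myqueue')

print(res.state)  # per the report: stuck at 'PENDING' with rpc://
print(res.get())  # 5, and only now does res.state become 'SUCCESS'
```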
If I start this without the -Q option (`celery -A tasks worker --loglevel=info`) I get: