can't get result with rabbitmq on win7 #2146
Comments
Some have reported this but I have not been able to reproduce. Are you sure the worker is configured with the same result backend? Does it show the correct result backend in the banner at startup? Are you sure you don't have other old workers running that are accepting the tasks?
This worked for me: the --pool=solo flag seems to be the key. Wrote about this on Stack Overflow: http://stackoverflow.com/questions/26636033/trouble-getting-result-from-celery-queue Not sure why, though.
I have a problem I believe is related - see my comments in #2344 - they may shed some light. Something is broken with the multiprocessing pool on Windows. Having to use --pool=solo is not a good long-term solution for scalability on Windows. You should be able to reproduce by using a non-local broker and backend (it may not matter whether amqp, redis, etc.), as it looks like the subprocesses don't get the correct config and default to amqp://localhost.
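The config-propagation theory in the comment above can be illustrated with the standard library alone. This is not Celery's actual code path, just a minimal sketch of the general failure mode: a module-level default that is overridden at runtime in the parent survives a fork, but a fresh process (the "spawn" semantics, which is the only start method available on Windows) re-imports the module and sees only the import-time default. The `fakeconf` module and `BACKEND` name are invented for this sketch.

```python
import os
import subprocess
import sys
import tempfile

# Create a throwaway "config" module on disk with a module-level default,
# mimicking configuration that is applied at import time.
_moddir = tempfile.mkdtemp()
with open(os.path.join(_moddir, "fakeconf.py"), "w") as f:
    f.write('BACKEND = "amqp://localhost"\n')
sys.path.insert(0, _moddir)
import fakeconf

# Runtime override in this process, like pointing the app at a real backend.
fakeconf.BACKEND = "redis://localhost"

# A forked worker inherits the parent's memory and would see the override.
# A freshly started interpreter (spawn-style, as on Windows) re-imports
# the module from disk and sees only the default:
fresh = subprocess.run(
    [sys.executable, "-c", "import fakeconf; print(fakeconf.BACKEND)"],
    env={**os.environ, "PYTHONPATH": _moddir},
    capture_output=True, text=True, check=True,
).stdout.strip()

print(fakeconf.BACKEND)  # redis://localhost (this process)
print(fresh)             # amqp://localhost (fresh process)
```

If the worker subprocesses behave like the fresh interpreter here, they would fall back to the amqp://localhost default exactly as described above.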
I have the same issue. Windows 7 32-bit, Python 2.7.9, Celery 3.1.17 installed in a virtualenv, and local redis for broker and backend. I try to run the example from the quickstart with tasks.add, and the task is always in the PENDING state. The same code works fine under Linux. If I start the celery worker with --pool=solo, it also works fine under my Windows 7.
I had this issue today when going through the First Steps tutorial. AsyncResult.ready() was always false, and get() always timed out, though Celery showed the result was ready immediately (a simple add()).
I was able to repeat this behavior on OS X (10.10.3) with Celery 3.1.17 and redis 2.8.19. With --pool=solo, chord_unlock finished and there were no issues at all. Similar things happened with chord, but I do not have a scenario.
We had the same problem and traced it back to a change in Celery 3.1.13. Code example that reproduces the problem (on Windows 7):
The result of calling `task1.delay()` differs between Celery 3.1.12 and Celery 3.1.13+. If we look at the queue, the subsequent tasks (task2 in the example) seem to be submitted to the queue but not picked up by a worker process. Using --pool=solo works around the problem but (as @sshinault mentioned) is not a solution. Here is the result of celery report:
I have the same issue on Raspberry Pi 2. celery -A tasks worker --loglevel=info --pool=solo solved my problem.
Same issue here. Steps:
Conclusion: need to trigger --pool=solo for the first time, but after the signal, it's all OK with or without --pool=solo. Cheers
I have the problem on Windows 7 too.
Don't use
Setting
Firstly, thanks. But I want to know why? |
I have the same problem using Windows 7 also. Can also confirm that running with --pool=solo works.
Similar problems with Windows 10 (build 10.0.10586), Celery 3.1.23, Python 3.4, RabbitMQ 3.6.1.
Closing this, as we don't have the resources to support Windows.
Hello, sorry to comment on this closed issue, but this is still happening as of today... Found the fix via https://stackoverflow.com/questions/25495613/celery-getting-started-not-able-to-retrieve-results-always-pending
celery 3.1.13, win7 x64 sp1, python2.7.8, rabbitmq 3.3.4
First Steps with Celery, simplest sample
ready() always returns False, but the worker log is right; I can see the task being executed in the worker.