
can't get result with rabbitmq on win7 #2146

Closed
JayceM6 opened this issue Jul 14, 2014 · 17 comments

@JayceM6

JayceM6 commented Jul 14, 2014

celery 3.1.13, win7 x64 sp1, python2.7.8, rabbitmq 3.3.4
First Steps with Celery, simplest sample
ready() always returns False, but the worker log looks right: I can see the task being executed in the worker.
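
For reference, the client-side check from the First Steps tutorial looks roughly like this (a sketch; the add task and module name come from the tutorial, not from this report):

# client-side check (Python 2.7, matching the environment above)
from tasks import add

result = add.delay(4, 4)
print(result.ready())          # keeps returning False on the affected setups
print(result.get(timeout=10))  # times out, even though the worker log shows success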

@ask
Contributor

ask commented Aug 15, 2014

Some have reported this but I have not been able to reproduce.

Are you sure the worker is configured with the same result backend?

Does it show the correct result backend in the banner at startup?

Are you sure you don't have other old workers running that are accepting the tasks?

@cspears2002

This worked for me:
C:\Python27\Scripts\celery.exe -A messaging.tasks worker --loglevel=info --pool=solo

The --pool=solo flag seems to be the key. I wrote about this on Stack Overflow: http://stackoverflow.com/questions/26636033/trouble-getting-result-from-celery-queue

Not sure why though.
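
If you would rather not pass the flag on every invocation, the same pool can be selected in configuration (a sketch; CELERYD_POOL is the Celery 3.x setting name):

# celeryconfig.py
CELERYD_POOL = 'solo'  # equivalent to passing --pool=solo to the worker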

@sshinault

I have a problem I believe is related - see my comments in #2344 - they may shed some light.

Something is broken with the multiprocessing pool on Windows. Having to use --pool=solo is not a good long-term solution for scalability on Windows.

You should be able to reproduce this by using a non-local broker and backend (it may not matter whether it's amqp, redis, etc.), as it looks like the subprocesses don't get the correct config and default to amqp://localhost.
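
One way to test that hypothesis is to log the configuration a pool subprocess actually sees (a sketch; the task below is illustrative and not from this thread):

# diagnostic task -- run on Windows and compare against your configured URLs
from celery import current_app

@current_app.task
def show_config():
    # if the subprocess lost its config, these fall back to the built-in
    # defaults (amqp:// on localhost, no result backend)
    return (current_app.conf.BROKER_URL,
            current_app.conf.CELERY_RESULT_BACKEND)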

@timson

timson commented Dec 22, 2014

I have the same issue. Windows 7 32-bit, Python 2.7.9, Celery 3.1.17 installed in a virtualenv, and local redis for broker and backend. I tried to run the example from the quickstart with tasks.add, and the task is always in the PENDING state. The same code works fine under Linux. If I start the celery worker with --pool=solo, it also works fine under my Windows 7.
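
For context, the quickstart module in question is essentially this (a sketch, assuming the local redis broker and backend mentioned above):

# tasks.py -- First Steps example with redis as both broker and backend
from celery import Celery

app = Celery('tasks',
             broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')

@app.task
def add(x, y):
    return x + y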

@dewyatt

dewyatt commented Mar 17, 2015

I had this issue today when going through the First Steps tutorial.
celery@... v3.1.17 (Cipater)
Windows-7-6.1.7601-SP1
(x86_64)
RabbitMQ 3.5.0

AsyncResult.ready() was always False and get() always timed out, though Celery showed the result was ready immediately (a simple add()).
Running with --pool=solo did indeed make this work.

@jerzyk

jerzyk commented Mar 27, 2015

I was able to repeat this behavior on OS X (10.10.3) with celery 3.1.17 and redis 2.8.19: when one of the tasks from the group header failed, celery.chord_unlock kept running in the PENDING state (the value of CELERY_CHORD_PROPAGATES did not change this).

With --pool=solo, chord_unlock finished and there were no issues at all.

Similar things happened with chord, but I do not have a reproduction scenario.

@rico-suave

We had the same problem and traced it back to a change in Celery 3.1.13.
We are executing a task that in turn starts another task using delay(); this works on Celery 3.1.12, but not on 3.1.13.

Code example that reproduces the problem (on Windows 7):

# gmtasks/tasks.py (Python 2); `celery` here is the app instance.
# The imports below are assumed -- the original snippet did not show them.
import logging

import gmtasks.tasks  # self-import, so task1 can reference task2 by full path
from gmtasks import celery  # assumed location of the Celery() app


@celery.task()
def task1():
    logger = logging.getLogger('tasktest')
    logger.error('task1 was started')
    print "task1"
    logger.error('going to start task2')
    gmtasks.tasks.task2.delay()
    logger.error('task1 is finished')


@celery.task()
def task2():
    logger = logging.getLogger('tasktest')
    logger.error('task2 was started')
    print "task2"
    logger.error('task2 is finished')

The result of calling `task1.delay()` is:

Celery 3.1.12
[2015-07-20 15:30:10,696: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672/gmtasks
[2015-07-20 15:30:10,736: INFO/MainProcess] mingle: searching for neighbors
[2015-07-20 15:30:11,803: INFO/MainProcess] mingle: all alone
[2015-07-20 15:30:11,878: WARNING/MainProcess] celery@S2-COMPUTER8786 ready.
[2015-07-20 15:30:20,267: INFO/MainProcess] Received task: gmtasks.tasks.task1[c3392b60-8407-4e47-ade3-8e508585d6bc]
[2015-07-20 15:30:20,269: ERROR/Worker-1] task1 was started
[2015-07-20 15:30:20,276: WARNING/Worker-1] task1
[2015-07-20 15:30:20,276: ERROR/Worker-1] going to start task2
[2015-07-20 15:30:20,309: INFO/MainProcess] Received task: gmtasks.tasks.task2[f6569c50-c4d4-48c6-b78a-bcfd099cb6f3]
[2015-07-20 15:30:20,309: ERROR/Worker-1] task1 is finished
[2015-07-20 15:30:20,321: INFO/MainProcess] Task gmtasks.tasks.task1[c3392b60-8407-4e47-ade3-8e508585d6bc] succeeded in 0.050999879837s: None
[2015-07-20 15:30:20,313: ERROR/Worker-1] task2 was started
[2015-07-20 15:30:20,325: WARNING/Worker-1] task2
[2015-07-20 15:30:20,325: ERROR/Worker-1] task2 is finished
[2015-07-20 15:30:20,365: INFO/MainProcess] Task gmtasks.tasks.task2[f6569c50-c4d4-48c6-b78a-bcfd099cb6f3] succeeded in 0.0510001182556s: None

Celery 3.1.13+
[2015-07-20 15:28:49,584: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672/gmtasks
[2015-07-20 15:28:49,604: INFO/MainProcess] mingle: searching for neighbors
[2015-07-20 15:28:50,651: INFO/MainProcess] mingle: all alone
[2015-07-20 15:28:50,716: WARNING/MainProcess] celery@S2-COMPUTER8786 ready.
[2015-07-20 15:29:05,413: INFO/MainProcess] Received task: gmtasks.tasks.task1[32bccea6-e522-4ff4-8d4e-616f8851bea5]
[2015-07-20 15:29:05,414: ERROR/Worker-1] task1 was started
[2015-07-20 15:29:05,423: WARNING/Worker-1] task1
[2015-07-20 15:29:05,423: ERROR/Worker-1] going to start task2
[2015-07-20 15:29:05,448: ERROR/Worker-1] task1 is finished
[2015-07-20 15:29:05,459: INFO/MainProcess] Task gmtasks.tasks.task1[32bccea6-e522-4ff4-8d4e-616f8851bea5] succeeded in 0.0450000762939s: None

If we look at the queue, the subsequent tasks (task2 in the example) seem to be submitted, but not picked up by a worker process.

Using --pool=solo works around the problem but (as @sshinault mentioned) is not a solution.
We are testing this on Windows 7 (64 bit).

Here is the result of celery report:

software -> celery:3.1.18 (Cipater) kombu:3.0.26 py:2.7.4
            billiard:3.3.0.20 py-amqp:1.4.6
platform -> system:Windows arch:32bit, WindowsPE imp:CPython
loader   -> celery.loaders.default.Loader
settings -> transport:amqp results:disabled

@necklinux

I have the same issue on Raspberry Pi 2.
Linux version 3.18.7-v7+ (dc4@dc4-XPS13-9333) (gcc version 4.8.3 20140303 (prerelease) (crosstool-NG linaro-1.13.1+bzr2650 - Linaro GCC 2014.03) ) #755 SMP PREEMPT Thu Feb 12 17:20:48 GMT 2015
celery version: 3.1.18 (Cipater)
RabbitMQ version: 2.8.4

celery -A tasks worker --loglevel=info --pool=solo solves my problem.
But then I killed celery and ran the command below:
celery -A tasks worker --loglevel=info
and I could still get the result 'SUCCESS' from result.status.
Wow, I'm confused.

@manyan

manyan commented Sep 14, 2015

same issue here
OS: Mac 10.10.5
python: 2.7
Celery: 3.1.18
rabbitmq: 3.5.4

Steps:

  1. First time: I started Celery as: celery -A tasks worker --loglevel=info
    The task is always PENDING; the worker log looks normal.

  2. Kill the worker and start it again with --pool=solo: works perfectly.

  3. Kill the above worker and start it without --pool=solo: it still works.

Conclusion: --pool=solo needs to be used the first time, but after that everything is OK with or without --pool=solo.
I guess --pool=solo might trigger some config change or similar.

Cheers

@starplanet

I have the problem on Windows 7 too.
Using '--pool=solo' works for me, but I want to know why. Does the newest Celery version solve this problem?

@ask
Contributor

ask commented Dec 29, 2015

Don't use Celery(backend='...'); use app.config_from_object('proj.celeryconfig'), where proj/celeryconfig.py is a module including CELERY_RESULT_BACKEND = '...'.
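
In code, the layout @ask describes looks roughly like this (a sketch; the URLs are placeholders):

# proj/celeryconfig.py -- plain module-level settings
BROKER_URL = 'amqp://guest@localhost//'
CELERY_RESULT_BACKEND = 'rpc://'

# proj/celery.py
from celery import Celery

app = Celery('proj')
app.config_from_object('proj.celeryconfig')

The idea is that settings living in an importable module get re-read when a freshly spawned pool process re-imports the app, rather than depending on constructor arguments inherited from the parent process.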

@floens

floens commented Jan 1, 2016

Setting CELERY_RESULT_BACKEND fixed the issue. The documentation that tells us to use Celery(backend='...') should probably be updated.

@yaoelvon

First of all, thanks. But I want to know: why?

@zjost

zjost commented Apr 21, 2016

I have the same problem on Windows 7. I can also confirm that running with --pool=solo changes the behavior.

@RudolfCardinal

RudolfCardinal commented May 12, 2016

Similar problems with Windows 10 (10-10.0.10586), Celery 3.1.23, Python 3.4, RabbitMQ 3.6.1.
Without "--pool=solo", tasks enter the Reserved list and are not Executed. With it, tasks are executed.
The exact same code works fine under Linux.
It seems independent of CELERY_RESULT_BACKEND: if that is not set, obviously no results are received, but the task is not executed either (it goes into the reserved list). The "-Ofair" option didn't help, and neither did "--concurrency=4" versus "--concurrency=1" (except that a concurrency higher than 1 appears to be required to get even things like "celery -A myapp status" to work).
Only the "--pool=solo" option actually made the tasks get executed.
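
The reserved-but-never-executed state described above can be observed from a second shell (a sketch; 'proj' stands in for the actual app module):

# check_reserved.py
from proj.celery import app

i = app.control.inspect()
print(i.reserved())  # tasks the worker has prefetched but not started
print(i.active())    # tasks currently executing (empty when the bug bites)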

@ask
Contributor

ask commented Jun 23, 2016

Closing this, as we don't have the resources to support Windows.

@ignaciofite

Hello, sorry to comment on this closed issue, but this is still happening as of today. I found the fix via https://stackoverflow.com/questions/25495613/celery-getting-started-not-able-to-retrieve-results-always-pending
