Unable to run tasks under Windows #4081

SPKorhonen opened this Issue Jun 8, 2017 · 8 comments


SPKorhonen commented Jun 8, 2017

Celery 4.x starts (with the fixes from #4078), but all tasks crash.

Steps to reproduce

Follow the First Steps tutorial (http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html):

celery -A tasks worker --loglevel=info
from tasks import add
add.delay(2, 2)
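
For reference, tasks.py from the tutorial looks roughly like this (a sketch; the broker URL is the tutorial's default and may differ locally):

from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y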

Expected behavior

Task is executed and a result of 4 is produced

Actual behavior

The worker stays up, but every task fails with a ValueError in the task handler:

"C:\Program Files\Python36\Scripts\celery.exe" -A perse.celery worker -l info

 -------------- celery@PETRUS v4.0.2 (latentcall)
---- **** -----
--- * ***  * -- Windows-10-10.0.14393-SP0 2017-06-08 15:31:22
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         perse:0x24eecc088d0
- ** ---------- .> transport:   amqp://guest:**@localhost:5672//
- ** ---------- .> results:     rpc://
- *** --- * --- .> concurrency: 12 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery

[tasks]
. perse.tasks.celery_add

[2017-06-08 15:31:22,685: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2017-06-08 15:31:22,703: INFO/MainProcess] mingle: searching for neighbors
[2017-06-08 15:31:23,202: INFO/SpawnPoolWorker-5] child process 5124 calling self.run()
[2017-06-08 15:31:23,207: INFO/SpawnPoolWorker-4] child process 10848 calling self.run()
[2017-06-08 15:31:23,208: INFO/SpawnPoolWorker-10] child process 5296 calling self.run()
[2017-06-08 15:31:23,214: INFO/SpawnPoolWorker-1] child process 5752 calling self.run()
[2017-06-08 15:31:23,218: INFO/SpawnPoolWorker-3] child process 11868 calling self.run()
[2017-06-08 15:31:23,226: INFO/SpawnPoolWorker-11] child process 9544 calling self.run()
[2017-06-08 15:31:23,227: INFO/SpawnPoolWorker-6] child process 16332 calling self.run()
[2017-06-08 15:31:23,229: INFO/SpawnPoolWorker-8] child process 3384 calling self.run()
[2017-06-08 15:31:23,234: INFO/SpawnPoolWorker-12] child process 8020 calling self.run()
[2017-06-08 15:31:23,241: INFO/SpawnPoolWorker-9] child process 15612 calling self.run()
[2017-06-08 15:31:23,243: INFO/SpawnPoolWorker-7] child process 9896 calling self.run()
[2017-06-08 15:31:23,245: INFO/SpawnPoolWorker-2] child process 260 calling self.run()
[2017-06-08 15:31:23,730: INFO/MainProcess] mingle: all alone
[2017-06-08 15:31:23,747: INFO/MainProcess] celery@PETRUS ready.
[2017-06-08 15:31:49,412: INFO/MainProcess] Received task: perse.tasks.celery_add[524d788e-e024-493d-9ed9-4b009315fea3]
[2017-06-08 15:31:49,416: ERROR/MainProcess] Task handler raised error: ValueError('not enough values to unpack (expected 3, got 0)',)
Traceback (most recent call last):
  File "c:\program files\python36\lib\site-packages\billiard\pool.py", line 359, in workloop
    result = (True, prepare_result(fun(*args, **kwargs)))
  File "c:\program files\python36\lib\site-packages\celery\app\trace.py", line 518, in _fast_trace_task
    tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)
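
For context: on Windows, billiard starts pool workers with the spawn method rather than fork, so module-level state set up in the parent process is not inherited by the children; the _loc list that _fast_trace_task unpacks is therefore still empty when a child receives a task. A minimal sketch of that failure mode (illustrative only, with hypothetical names, not Celery's actual code):

import multiprocessing as mp

STATE = []  # stands in for the worker-side global that Celery populates

def child():
    # With the 'spawn' start method the child re-imports this module,
    # so STATE is the empty default here and the unpack fails exactly
    # like "tasks, accept, hostname = _loc" above.
    tasks, accept, hostname = STATE

if __name__ == '__main__':
    STATE.extend(['tasks', 'accept', 'hostname'])  # populated in the parent only
    p = mp.get_context('spawn').Process(target=child)
    p.start()
    p.join()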

Fix

See pull request #4078


drewdogg commented Jun 24, 2017

FWIW I worked around this by using the eventlet pool implementation ("-P eventlet" command line option).


felixhao28 commented Aug 1, 2017

@drewdogg's solution should be mentioned in the tutorial.


np-8 commented Oct 23, 2017

I can confirm: this bug appears on

Celery 4.1.0
Windows 10 Enterprise 64 bit

when running the command celery -A <mymodule> worker -l info,

and the following workaround works:

pip install eventlet
celery -A <mymodule> worker -l info -P eventlet

auvipy (Member) commented Dec 6, 2017

It's enough to define the FORKED_BY_MULTIPROCESSING=1 environment variable for the worker instance.
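
For example, in a Windows cmd shell (the tasks module name is the tutorial's, assumed here):

set FORKED_BY_MULTIPROCESSING=1
celery -A tasks worker --loglevel=info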

auvipy closed this Dec 6, 2017


zeleven commented Apr 17, 2018

@auvipy Works for me, thanks.


wonderfulsuccess commented Jul 28, 2018

@auvipy It really solves the problem :) 👍
Adding:

import os
os.environ.setdefault('FORKED_BY_MULTIPROCESSING', '1')

before defining the Celery instance is enough.
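
Concretely, placed at the top of the tutorial's tasks.py (a sketch; the broker URL is assumed):

import os
os.environ.setdefault('FORKED_BY_MULTIPROCESSING', '1')  # must run before the Celery app is created

from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')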

auvipy (Member) commented Aug 1, 2018

Maybe this should be mentioned in the docs? @wonderfulsuccess, care to send a pull request?


ajosecueto commented Oct 28, 2018

@wonderfulsuccess Thanks so much!

auvipy added this to the v4.3 milestone Nov 2, 2018

auvipy reopened this Nov 2, 2018

auvipy modified the milestones: v4.3, v5.0.0 Nov 17, 2018
