
Celery task runs within the HTTP request/response cycle instead of worker #1821

Closed
scottwoodall opened this issue Oct 2, 2018 · 4 comments

scottwoodall commented Oct 2, 2018

What happened?

I called taskapp.debug_task from within the Django admin's save_model method, and the task ran within the HTTP request/response cycle.

What should've happened instead?

I would have expected this job to be queued with Celery and to see a log entry showing that the worker ran the task.

Steps to reproduce

Environment: macOS 10.14, Docker

  1. cookiecutter https://github.com/pydanny/cookiecutter-django (take all defaults except answer y to Docker and Celery).
  2. cd my_awesome_project; docker-compose -f local.yml build
  3. Edit my_awesome_project/users/admin.py and add the following code to class UserAdmin so it runs the Celery task:
from my_awesome_project.taskapp.celery import debug_task  # default taskapp layout generated by the template

def save_model(self, request, obj, form, change):
    super().save_model(request, obj, form, change)

    # Queue the task only when a new user is created, not on edits.
    if not change:
        debug_task.delay()
  4. docker-compose -f local.yml run --rm django python manage.py createsuperuser
  5. Log in and add a new user at http://127.0.0.1:8000/admin/users/user/add/
  6. See output in the Docker log showing that the task was run by the django_1 container: django_1 | Request: <Context: {'id': '15dab4e4-2cca-4b65-8f75-2b4c40145e78', 'retries': 0, 'is_eager': True, 'logfile': None, 'loglevel': 0, 'hostname': '924647935b2f', 'callbacks': None, 'errbacks': None, 'headers': None, 'delivery_info': {'is_eager': True}, 'args': (), 'called_directly': False, 'kwargs': {}}>

I added a sleep(10) inside debug_task to verify it was running within the HTTP request/response cycle: the browser hangs for 10 seconds and then continues.
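For reference, a minimal sketch of the modified task, assuming the stock debug_task from the generated taskapp/celery.py (the sleep call is the only addition):

from time import sleep

from celery import Celery

app = Celery('my_awesome_project')  # stands in for the app instance defined in taskapp/celery.py

@app.task(bind=True)
def debug_task(self):
    sleep(10)  # added: makes in-request (eager) execution obvious in the browser
    print('Request: {0!r}'.format(self.request))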

sharmi commented Oct 4, 2018

That is because, in local development, the Celery task is configured to run in sync (eager) mode, which makes debugging easier. When you run the production version (docker-compose -f production.yml up), the job will be queued and executed asynchronously.

Check config/settings/local.py and you will see the following config; this is what makes Celery act synchronously.

# ------------------------------------------------------------------------------
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#task-always-eager
CELERY_TASK_ALWAYS_EAGER = True
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#task-eager-propagates
CELERY_TASK_EAGER_PROPAGATES = True

Setting those values to False will make Celery queue the jobs.
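For example, with those settings flipped to False you can confirm from a Django shell that the task really goes through the broker; a rough sketch (the import path assumes the default taskapp layout):

from my_awesome_project.taskapp.celery import debug_task

result = debug_task.delay()
# With eager mode on, delay() returns an already-finished EagerResult (status 'SUCCESS').
# With eager mode off, it returns an AsyncResult that stays 'PENDING' until a worker runs it.
print(result.status)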

scottwoodall (Author) commented

Aww, thank you. It felt like one of those things where I was the one doing something wrong. I wasn't aware Celery could be configured in that fashion.

Thank you for the information and feedback.

the1plummie commented

The whole point of using Docker Compose and containers is that you can have a development environment as close to production as possible, so making Celery tasks run synchronously by default is unexpected. I also spent a couple of hours trying to understand the behavior. Sure, that's because I'm a newbie to Celery, but isn't that the point of using cookiecutter? I would recommend changing the default behavior to queued and async, while leaving an option to enable sync mode when debugging is needed (see the sketch below).
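A sketch of that suggested default, assuming the settings live in config/settings/local.py (the CELERY_TASK_EAGER environment variable is an illustrative name, not something the template defines):

import os

# Queue tasks to the worker by default; opt into eager (in-process)
# execution only when explicitly requested for debugging.
CELERY_TASK_ALWAYS_EAGER = os.environ.get('CELERY_TASK_EAGER') == '1'
CELERY_TASK_EAGER_PROPAGATES = CELERY_TASK_ALWAYS_EAGER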

browniebroke (Member) commented

The default value was actually changed in #1945. This should not be a problem anymore.
