
Celery does not pick up new tasks and does not execute old tasks according to the schedule on Ubuntu + Django #7913

yashmjshinichi opened this issue Nov 19, 2022 · 6 comments

Comments


yashmjshinichi commented Nov 19, 2022

Checklist

  • I have verified that the issue exists against the master branch of Celery.
  • This has already been asked to the discussions forum first.
  • I have read the relevant section in the
    contribution guide
    on reporting bugs.
  • I have checked the issues list
    for similar or identical bug reports.
  • I have checked the pull requests list
    for existing proposed fixes.
  • I have checked the commit log
    to find out if the bug was already fixed in the master branch.
  • I have included all related issues and possible duplicate issues
    in this issue (If there are none, check this box anyway).

Mandatory Debugging Information

  • I have included the output of celery -A proj report in the issue.
    (if you are not able to do this, then at least specify the Celery
    version affected).
  • I have verified that the issue exists against the master branch of Celery.
  • I have included the contents of pip freeze in the issue.
  • I have included all the versions of all the external dependencies required
    to reproduce this bug.

Optional Debugging Information

  • I have tried reproducing the issue on more than one Python version
    and/or implementation.
  • I have tried reproducing the issue on more than one message broker and/or
    result backend.
  • I have tried reproducing the issue on more than one version of the message
    broker and/or result backend.
  • I have tried reproducing the issue on more than one operating system.
  • I have tried reproducing the issue on more than one workers pool.
  • I have tried reproducing the issue with autoscaling, retries,
    ETA/Countdown & rate limits disabled.
  • I have tried reproducing the issue after downgrading
    and/or upgrading Celery and its dependencies.

Related Issues and Possible Duplicates

Related Issues

Possible Duplicates

Environment & Settings

Celery version: celery==5.2.7

celery report Output:

CELERY_ACCEPT_CONTENT: ['application/json']
CELERY_BEAT_SCHEDULER: 'django_celery_beat.schedulers:DatabaseScheduler'
CELERY_ENABLE_UTC: False
CELERY_REDIS_BACKEND_USE_SSL: {
 'ssl_cert_reqs': <VerifyMode.CERT_REQUIRED: 2>}
CELERY_RESULT_BACKEND: 'django-db'
CELERY_RESULT_SERIALIZER: 'json'
CELERY_TASK_SERIALIZER: 'json'
CELERY_TIMEZONE: 'Asia/Kolkata'

INSTALLED_APPS: [
 'django_celery_results',
 'django_celery_beat']
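
For context, `CELERY_`-prefixed settings like the ones above are normally picked up from Django's `settings.py` by a `celery.py` app module. A minimal sketch of the standard wiring, with `proj` as a placeholder for the real project package:

```python
# proj/celery.py -- standard Django + Celery wiring
# ("proj" is a placeholder for the actual project package)
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "proj.settings")

app = Celery("proj")
# Read every CELERY_-prefixed setting from Django's settings module
app.config_from_object("django.conf:settings", namespace="CELERY")
# Look for a tasks.py module inside each app in INSTALLED_APPS
app.autodiscover_tasks()
```

If this module is missing or never imported (it is usually imported from the project's `__init__.py`), beat and the worker will not see the Django settings or the per-app tasks.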


Steps to Reproduce

Required Dependencies

  • Minimal Python Version: python 3.8
  • Minimal Celery Version: 5.2.7
  • Minimal Kombu Version: N/A or Unknown
  • Minimal Broker Version: Redis 4.3.4
  • Minimal Result Backend Version: N/A or Unknown
  • Minimal OS and/or Kernel Version: Ubuntu 22.04.1 LTS x86_64
  • Minimal Broker Client Version: N/A or Unknown
  • Minimal Result Backend Client Version: N/A or Unknown

Python Packages

pip freeze Output:

amqp==5.1.1
asgiref==3.5.2
async-timeout==4.0.2
billiard==3.6.4.0
boto3==1.24.89
botocore==1.27.89
CacheControl==0.12.11
cachetools==5.2.0
celery==5.2.7
certifi==2022.9.24
cffi==1.15.1
charset-normalizer==2.1.1
click==8.1.3
click-didyoumean==0.3.0
click-plugins==1.1.1
click-repl==0.2.0
cryptography==38.0.1
Deprecated==1.2.13
Django==4.0.8
django-celery-beat==2.3.0
django-celery-results==2.4.0
django-cors-headers==3.13.0
django-jazzmin==2.5.0
django-timezone-field==5.0
djangorestframework==3.14.0
djangorestframework-simplejwt==5.2.1
firebase-admin==6.0.1
google-api-core==2.10.2
google-api-python-client==2.65.0
google-auth==2.13.0
google-auth-httplib2==0.1.0
google-cloud-core==2.3.2
google-cloud-firestore==2.7.2
google-cloud-storage==2.5.0
google-crc32c==1.5.0
google-resumable-media==2.4.0
googleapis-common-protos==1.56.4
grpcio==1.50.0
grpcio-status==1.50.0
httplib2==0.20.4
idna==3.4
install==1.3.5
jmespath==1.0.1
kombu==5.2.4
msgpack==1.0.4
packaging==21.3
Pillow==9.2.0
prompt-toolkit==3.0.31
proto-plus==1.22.1
protobuf==4.21.9
psycopg2==2.9.4
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.21
pyfcm==1.5.4
PyJWT==2.5.0
pyparsing==3.0.9
python-crontab==2.6.0
python-dateutil==2.8.2
python-dotenv==0.21.0
pytz==2022.4
redis==4.3.4
requests==2.28.1
rsa==4.9
s3transfer==0.6.0
six==1.16.0
sqlparse==0.4.3
types-cryptography==3.3.23.1
tzdata==2022.4
uritemplate==4.1.1
urllib3==1.26.12
vine==5.0.0
wcwidth==0.2.5
wrapt==1.14.1

Other Dependencies

N/A

Minimally Reproducible Test Case

Expected Behavior

Tasks should be recognized by the Celery beat and worker, and each task should execute according to its schedule.

Actual Behavior

The tasks listed in task.py are not picked up by the Celery beat or worker. Tasks that previously worked have stopped running, yet they remain in the beat registry (celery inspect registered) even after they are deleted and Redis is flushed (redis-cli flushall). Meanwhile, new tasks are not registered in the Celery tables visible in the admin panel.

The schedule worked as planned until a few days ago; when I created a new task it suddenly stopped working, and new tasks have not been accepted since. So far the output has been the same on Ubuntu and AWS.
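One thing worth checking, given the reference to task.py above: by default, `app.autodiscover_tasks()` searches each installed app for a module named `tasks.py`. If the tasks really live in a module named `task.py` (singular), autodiscovery would silently skip them. A sketch of how to point autodiscovery at a differently named module, assuming the standard app layout:

```python
# If tasks live in <app>/task.py rather than <app>/tasks.py,
# autodiscovery must be told the module name explicitly:
app.autodiscover_tasks(related_name="task")

# Equivalent to the default behaviour:
# app.autodiscover_tasks(related_name="tasks")
```

Renaming the module to `tasks.py` achieves the same effect without extra configuration.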

@open-collective-bot

Hey @yashmjshinichi 👋,
Thank you for opening an issue. We will get back to you as soon as we can.
Also, check out our Open Collective and consider backing us - every little helps!

We also offer priority support for our sponsors.
If you require immediate assistance please consider sponsoring us.

@AnonimBella

I got the same error

@yashmjshinichi
Author

@AnonimBella I have tried clearing the existing queue with celery -A myapp purge for the Redis broker, but the issue persists. Does it work in your case?

@AnonimBella

@yashmjshinichi I already solved this with a Redis flush. When I used celery purge, nothing changed.

@yashmjshinichi
Author

@AnonimBella I've tried flushing Redis and purging the queue; sometimes tasks are picked up and sometimes they aren't. I wonder what's wrong.

The improvement came from switching from BROKER_URL to CELERY_BROKER_URL in my settings.py.
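The rename matters because of how Celery reads Django settings: with `app.config_from_object("django.conf:settings", namespace="CELERY")`, only keys carrying the `CELERY_` prefix are consulted, so a bare `BROKER_URL` is ignored. A small illustrative sketch of that filtering (`load_namespaced` is a hypothetical helper mimicking the behaviour, not a Celery API):

```python
def load_namespaced(settings: dict, namespace: str = "CELERY") -> dict:
    """Illustrative only: mimic how config_from_object(..., namespace="CELERY")
    keeps only CELERY_-prefixed keys and maps them to lowercase setting names."""
    prefix = namespace + "_"
    return {
        key[len(prefix):].lower(): value
        for key, value in settings.items()
        if key.startswith(prefix)
    }

settings = {
    "BROKER_URL": "redis://old-host:6379/0",         # ignored under the namespace
    "CELERY_BROKER_URL": "redis://new-host:6379/0",  # picked up as broker_url
}
print(load_namespaced(settings))  # {'broker_url': 'redis://new-host:6379/0'}
```

With no namespaced key present, the broker URL falls back to Celery's default, which can look like tasks being silently dropped.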

@AnonimBella

@yashmjshinichi I have used CELERY_BROKER_URL since I created this project. I suspect my case was caused by an error in my own code that made the queue get stuck.

I hit the same situation with Laravel's queue, though I don't know whether it happens in Celery the same way.

So I added some try/catch blocks to my code, then flushed Redis and restarted Celery. I'm still monitoring it, and so far the queue works well.
