Doesn't seem to like Redis #8

Closed

volksman opened this issue Aug 20, 2012 · 5 comments

volksman commented Aug 20, 2012
I use a Redis backend for both brokering and results. I have two sites on my server, both using Celery. My confs are something along the lines of:

BROKER_BACKEND = 'redis'
BROKER_HOST = 'localhost'
BROKER_PORT = 6379
BROKER_VHOST = '4'

CELERY_RESULT_BACKEND = 'redis'
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_DB   = 5
REDIS_CONNECT_RETRY = True
CELERYBEAT_SCHEDULE_FILENAME = PROJECT_ROOT + 'celerybeat-schedule'

and

BROKER_BACKEND = 'redis'
BROKER_HOST = 'localhost'
BROKER_PORT = 6379
BROKER_VHOST = '2'

CELERY_RESULT_BACKEND = 'redis'
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_DB   = 3
REDIS_CONNECT_RETRY = True
CELERYBEAT_SCHEDULE_FILENAME = PROJECT_ROOT + 'celerybeat-schedule'

Note the DB numbers differ.
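
(For reference, on Celery 3.x I believe the same settings can be written as single URLs, with the DB number as the trailing path; for the first site that would be:)

BROKER_URL = 'redis://localhost:6379/4'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/5'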

I run celerybeat and celeryd in each virtualenv for the sites in question:

celerybeat --loglevel DEBUG --pidfile=/tmp/www_celerybeat.pid
celerybeat --loglevel DEBUG --pidfile=/tmp/portal_celerybeat.pid
celeryd --loglevel DEBUG -c2 -E --pidfile=/tmp/portal_celeryd.pid
celeryd --loglevel DEBUG -c2 -E --pidfile=/tmp/www_celeryd.pid

When I run flower in one of the environments I get the following issues:

  1. No workers appear in the worker view

  2. Let's say I run flower in env1. My celeryd.error log from env2 starts spitting out errors (at least one per second) like this:

[2012-08-20 15:33:27,046: DEBUG/MainProcess] * Dump of currently registered tasks:
celery.backend_cleanup
celery.chain
celery.chord
celery.chord_unlock
celery.chunks
celery.group
celery.map
celery.starmap
rsvp.tasks.send_mail_task
[2012-08-20 15:33:27,047: ERROR/MainProcess] Control command error: InconsistencyError("Queue list empty or key does not exist: u'_kombu.binding.reply.celery.pidbox'",)
Traceback (most recent call last):
  File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 512, in on_control
    self.pidbox_node.handle_message(body, message)
  File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/pidbox.py", line 103, in handle_message
    return self.dispatch(method, arguments, reply_to)
  File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/pidbox.py", line 85, in dispatch
    routing_key=reply_to['routing_key'])
  File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/pidbox.py", line 108, in reply
    channel=self.channel)
  File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/pidbox.py", line 190, in _publish_reply
    producer.publish(reply, routing_key=routing_key)
  File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/messaging.py", line 162, in publish
    immediate, exchange, declare)
  File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/messaging.py", line 170, in _publish
    mandatory, immediate, exchange)
  File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/entity.py", line 215, in publish
    immediate=immediate)
  File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 454, in basic_publish
    exchange, routing_key, **kwargs)
  File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/transport/virtual/exchange.py", line 61, in deliver
    for queue in _lookup(exchange, routing_key):
  File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/transport/virtual/__init__.py", line 535, in _lookup
    R = self.typeof(exchange).lookup(self.get_table(exchange),
  File "/media/sites/www.mydomain.com/myenv-env/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 476, in get_table
    self.keyprefix_queue % exchange))
InconsistencyError: Queue list empty or key does not exist: u'_kombu.binding.reply.celery.pidbox'
[2012-08-20 15:33:27,047: DEBUG/MainProcess] Consumer: Closing broadcast channel...

Note that the tasks listed at the top of the error are the ones registered in env2, not env1 where flower is running. Flower also reports the tasks running in env2, despite the error above.
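
To see where the pidbox bindings actually live, here is a quick diagnostic sketch (assuming redis-py is installed; the broker DB numbers are taken from my configs above):

import redis

# kombu's redis transport keeps exchange->queue bindings in
# '_kombu.binding.*' keys (Redis sets), one per exchange
for db in (2, 4):  # the two broker DBs from the configs above
    r = redis.StrictRedis(host='localhost', port=6379, db=db)
    print 'db %d:' % db
    for key in sorted(r.keys('_kombu.binding*')):
        print '  %s -> %r' % (key, r.smembers(key))

If '_kombu.binding.reply.celery.pidbox' exists in only one of the two DBs, that would match the error above.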

Is it my config or something broken in Flower?

mher (Owner) commented Aug 21, 2012

This doesn't look like a flower issue. Can you try the 'celery inspect registered' command?

volksman (Author) commented

Output of the requested command:

celery inspect registered --broker=redis://localhost:6379/2
-> portal.mydomain.ca: OK
* celery.backend_cleanup
* celery.chain
* celery.chord
* celery.chord_unlock
* celery.chunks
* celery.group
* celery.map
* celery.starmap
* support.tasks.check_mail_task
* utils.tasks.send_mail_task

A bit more info:

The Workers view now shows my portal worker process (not the www one, which is what I would expect to see). However, the errors in the www site keep flowing as they did yesterday.

ask (Collaborator) commented Aug 22, 2012

For the workers not showing up: do you have the yajl json library installed, by chance?
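
You can check which JSON implementation got picked up with something like this (assuming anyjson, which kombu uses for JSON, is importable):

import anyjson
# prints the name of the backend anyjson selected,
# e.g. 'yajl', 'simplejson' or the stdlib 'json'
print anyjson.implementation.name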

mher (Owner) commented Aug 22, 2012

@volksman could you create a minimal project that demonstrates the problem? None of our tests revealed anything suspicious. It would also be useful to have the output of 'celery report' and 'flower --logging=debug'.

mher closed this as completed Oct 13, 2012
Morpho commented Apr 18, 2013

Hi, I got the same problem. It's not related to flower. On my server I have 3 Django projects running celery with redis. Each project uses a separate redis DB. Project1 and Project2 use the same pluggable app, so many tasks got the same name (i.e. Project1.MyPluggableApp.MyTask and Project2.MyPluggableApp.MyTask). When I start Project2's worker while Project1's worker is already running, the above error occurs. But it doesn't occur when I start Project3, which has different task names (i.e. Project3.OtherApp.OtherTask). So I guess it's somehow related to the naming of workers or tasks within the redis DB.
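
One way to test that hypothesis is a sketch like this (Celery 3.x API; the broker URLs and DB numbers are assumptions matching the configs earlier in this thread), pinging each broker DB to see whether a worker from the other project answers:

from celery import Celery

for db in (2, 4):  # one broker DB per project, per the configs above
    app = Celery(broker='redis://localhost:6379/%d' % db)
    # ping() broadcasts over the pidbox exchange; a reply from the
    # other project's worker would indicate the traffic is leaking
    print db, app.control.inspect(timeout=1).ping()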

This is my traceback: http://nopaste.info/c54cc5f020.html
