Mixing queues list and connection configurations #28

Open
meteozond opened this issue May 1, 2013 · 5 comments

Comments

@meteozond

Hello, I think that mixing connection data and queue names in the configuration is not a good idea. At the very least, it won't let you create queues dynamically (an analogue of CELERY_CREATE_MISSING_QUEUES).

On top of that, there is some duplication, because the queue list is already stored in each worker's hash.

As far as I can see, this was done for only one reason: to show the list of queues in the interface.

It would be more consistent to pass the combination of connection and queue names as arguments to the worker management command, and to extract worker and queue data directly from Redis.
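
Something along these lines is already possible with plain rq (just a rough sketch; the connection parameters are placeholders):

from redis import Redis
from rq import Queue, Worker

connection = Redis(host='localhost', port=6379)  # placeholder connection settings

# Both queues and workers can be discovered from Redis itself,
# without listing queue names anywhere in the Django settings.
for queue in Queue.all(connection=connection):
    print(queue.name, queue.count)

for worker in Worker.all(connection=connection):
    print(worker.name, worker.queue_names())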

What do you think about such a pattern?

@acjay
Contributor

acjay commented May 1, 2013

If you're using django-redis for your caching, I just added an option that allows a level of indirection, reusing your Redis connection definition from the cache settings as the connection info for one or more queues. See the docs for this.
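
Roughly, the idea is that a queue entry can point at an existing cache definition instead of repeating the connection details; something like this (names here are illustrative):

CACHES = {
    'redis-cache': {
        'BACKEND': 'redis_cache.cache.RedisCache',
        'LOCATION': 'localhost:6379:0',
    },
}

RQ_QUEUES = {
    'default': {
        'USE_REDIS_CACHE': 'redis-cache',
    },
}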

But on a wider note, I think you do have a point, and it may be beyond the scope of this package. It seems to me that it's really not ideal to define the connection in the cache settings either. It would make a lot more sense to have a package that defines and handles the connections, and all of the services that use Redis (caching, queuing, etc.) would ride on that.

I'm thinking https://github.com/niwibe/django-redis is probably the place to put this into action. I think it's the most general-purpose of the Django Redis packages, and it would be really cool to see everything come together around it. That's just my opinion.

@selwin
Collaborator

selwin commented May 7, 2013

@meteozond the main purpose of django-rq is to provide convenience. For me, being able to centrally define the queues and refer to them by name is a big win, because it means you don't need to worry about managing the low-level Redis connection in your application code.
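
For example (send_welcome_email here is just a placeholder task, and a 'high' queue is assumed to be defined in RQ_QUEUES), application code only ever refers to the queue by name:

import django_rq

# The Redis connection behind the 'high' queue is resolved from settings,
# so the application code never touches connection details.
queue = django_rq.get_queue('high')
queue.enqueue(send_welcome_email, user_id=42)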

I agree with @acjay's assessment that it would be useful if all Redis-backed services could agree on a single syntax, or use a single package whose sole purpose is to manage Redis connections. If that were to happen, we could definitely support it:

REDIS_BACKENDS = {
    'server1': {
        ...
    },
    'server2': {
        ...
    }
}

As to creating queues dynamically, you can do so without using django-rq at all:

from redis import Redis
from rq import Queue

# dynamic_queue_names is whatever iterable of names you build at runtime
for name in dynamic_queue_names:
    queue = Queue(name, connection=Redis())

As for your second point, extracting queue and worker data directly from Redis in addition to the queues defined in settings, I'll gladly accept a patch for that.

@meteozond
Author

@selwin I absolutely agree with you that today there is no single backend definition syntax. What's more, while debating this with a colleague, I realized that key-value stores are not caches and shouldn't be connected as cache backends. They are an autonomous technology today, like databases, caches, and template engines, and should not be mixed in with the others. I think we need some kind of proxy library that provides a unified interface to key-value features (including pub/sub and so on) across different backends.
At my previous job we had an internal solution, but it was never open-sourced. I believe that some day Django will provide key-value storage backend support out of the box.

@acjay I've got your point, I just wanted to say two things:

  1. Working with queues should be more flexible.
  2. Backend settings should be separated from queue management, because they are different concerns.

@meteozond
Author

I've got some time to put part of my idea into code.
cybergrom@8b706d5
cybergrom@474604b
cybergrom@cb5c207

  1. Creating a new queue now only requires enqueuing a task to it.
  2. Changed RQ_QUEUES to RQ_CONNECTIONS (with the same syntax).
  3. Removed the get_queue_by_index requirement.
  4. Enhanced the arguments of rqworker and rqscheduler so the connection can be specified.
  5. Added a connection_name argument to some methods.
  6. The old rqworker and rqscheduler arguments can still be used as before.

./manage.py rqworker - will start a worker for the default queue on the default connection
./manage.py rqworker low - will start a worker for the low queue on the default connection
./manage.py rqworker redis.low - will start a worker for the low queue on the redis connection
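
Since RQ_CONNECTIONS keeps the same per-entry syntax as RQ_QUEUES, the settings behind the examples above would look roughly like this (hosts, ports and databases are placeholders):

RQ_CONNECTIONS = {
    'default': {
        'HOST': 'localhost',
        'PORT': 6379,
        'DB': 0,
    },
    'redis': {
        'HOST': 'localhost',
        'PORT': 6379,
        'DB': 1,
    },
}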

Any suggestions?
Could it be merged?

@acjay
Contributor

acjay commented Sep 17, 2013

I somehow missed your previous reply, sorry! I haven't checked out your commits to comment on the implementation, but to the extent that my vote counts, I'm all for anything that increases DRYness and flexibility. My change was meant less as the "optimal" setup and more as a backwards-compatible extension. That's my 2 cents :)
