Maxsize configuration for task queues #2874
I'd like to configure the maximum size of a task queue for a celery cluster. When a maxsize is set, celery would either block or error when the number of queued tasks is at (or near) maxsize. I'd like to be able to either handle an error when calling `task.apply_async()` or decide to block and wait.

My ultimate purpose for this is to apply "backpressure" to requests so that, if the queue becomes overloaded, new requests can be rejected. Currently, it seems that the celery queue (I'm using redis as a broker/backend) will fill with a very large number of requests. I'd like to reduce that number to one that my cluster could easily process in 3-4 seconds when operating at full capacity.

As far as I can tell, the best workaround is to query `redis` or `rabbitmq` before attempting to request that celery work on a new task. This is undesirable for a lot of reasons and requires me to research celery's internals.
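The polling workaround described above might look roughly like the sketch below. It leans on the fact that Celery's default Redis broker keeps each queue as a Redis list keyed by the queue name ("celery" by default); `process_request`, `MAX_QUEUE_SIZE`, and the choice to raise are illustrative, not part of Celery's API.

```python
# Sketch of the polling workaround: check broker-side queue depth
# before enqueueing. Assumes the default Redis broker layout, where
# pending tasks live in a Redis list named after the queue ("celery").
import redis

from myapp.tasks import process_request  # hypothetical task module

MAX_QUEUE_SIZE = 1000  # illustrative limit
r = redis.Redis()

def submit_with_backpressure(payload):
    # LLEN is O(1), but the depth can change between this check and
    # apply_async(), so this is advisory rather than a hard limit
    # (exactly the race pointed out in the comments below).
    if r.llen("celery") >= MAX_QUEUE_SIZE:
        raise RuntimeError("task queue is full; rejecting request")
    return process_request.apply_async(args=[payload])
```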
Comments

RabbitMQ does have features for this already; have you looked into solving it on the broker side?
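Presumably the kind of broker-side feature this comment refers to is RabbitMQ's per-queue `x-max-length` argument, which caps the number of ready messages in a queue. Below is a sketch of declaring such a queue through kombu (which Celery uses for its queue definitions); the queue name and limit are made up:

```python
# Declare a length-capped RabbitMQ queue via kombu. With x-max-length,
# RabbitMQ drops the oldest messages once the limit is reached; newer
# RabbitMQ releases can instead reject publishes via the x-overflow
# argument.
from kombu import Queue

task_queues = (
    Queue(
        "bounded",                  # illustrative queue name
        routing_key="bounded",
        queue_arguments={"x-max-length": 1000},
    ),
)
```

Such queues can then be handed to Celery through its queue configuration (the CELERY_QUEUES setting in the 3.x releases current at the time of this issue). Note the cap is enforced entirely by the broker, so the publisher never has to poll for the queue depth.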
I've been using redis as both a broker and a backend. I don't see where/how I'd configure a RabbitMQ broker to limit the number of queued tasks, but I agree that it seems to make more sense to handle this in the broker (assuming the broker manages queued tasks). I filed this issue half hoping that I'm missing something obvious.
@ask, if you'd be able to give me some direction (really just a file/lineno) to start looking, I'll start working on a PR to add this.
It looks like the Right(TM) place to implement the length check would be kombu's …
If you ask the broker what the size is, the size could have increased by millions of messages before you receive the reply, so I think it needs to be solved at the broker level.
As this requires us to make patches to brokers, I think we can safely close this as Not funded right now :(