Simple Redis message queue manager with flow control for Ruby applications.
SmoothQueue manages message queues in Redis, making sure that your application doesn't try to process more messages than it can handle.
Sometimes you don't want to process messages as fast as possible. Wat?
A classic example is when you use a background processing framework (such as Resque or Sidekiq) and you don't want certain workers to process more than 10 jobs simultaneously, even though your background processor can handle 50 jobs at a time.
Q: Why would you not want to execute every single task in your application as fast as possible?
A: Rate limiting is often a feature requirement/constraint. Here are a few examples:
- The processing of certain messages is so expensive that it can stress your application and cause slowness everywhere else.
- The processing of certain messages results in API calls to a server that imposes rate limits.
- You cannot afford the computing power to handle the extra load when certain jobs run with high concurrency.
Q: Can't background job libraries already do this?
A: Kinda. They can, but none of them does this efficiently, because their main focus is always to get jobs done ASAP.
- Sidekiq: there are a couple of options to achieve flow control, but none is super efficient:
  - You can use custom job fetcher plugins, but then you give up the extraordinary reliability that Mike Perham has built into Sidekiq. This gets even worse if you pay for Sidekiq Pro, since you would be overriding its even more reliable fetch.
  - If you pay for Sidekiq Enterprise, you can try the Limiter, but its algorithm is unfair and inefficient as a rate limiter.
- Resque and Sidekiq: create multiple queues and launch one Resque/Sidekiq process per queue that you want to limit, so that you can pin a worker count to each queue. There are two disadvantages to this approach, as the sketch below shows: (a) the more job types you need to limit, the more memory the server wastes; for instance, 5 job types that need specific worker counts means 5 Ruby processes; and (b) if you have server redundancy, you have to divide the worker counts by the number of servers, which is excessive ops work.
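To make that last workaround concrete, it usually looks like this hypothetical Procfile (queue names and concurrency values are illustrative only): one dedicated OS process per rate-limited job type, each eating its own chunk of memory.

```
# Hypothetical Procfile for the per-queue-process workaround
heavy_lifting: bundle exec sidekiq -q heavy_lifting -c 6
very_heavy_lifting: bundle exec sidekiq -q very_heavy_lifting -c 3
```

SmoothQueue takes a different route: you keep your normal worker fleet and declare all the per-queue limits in one place.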
```ruby
SmoothQueue.configure do |config|
  config.add_queue('heavy_lifting', max_concurrency = 6) do |id, message|
    # The id is generated by SmoothQueue to identify the message being processed
    HeavyLiftingWorker.perform_later(id, message) # Make sure this operation is ~O(1)
  end

  config.add_queue('very_heavy_lifting', max_concurrency = 3) do |id, message|
    # The id is generated by SmoothQueue to identify the message being processed
    VeryHeavyLiftingWorker.perform_later(id, message) # Make sure this operation is ~O(1)
  end
end

SmoothQueue.loop_async! # Starts a very lightweight thread that monitors the queues
```
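Curious what `SmoothQueue.loop_async!` does conceptually? Here is a rough sketch; this is not the gem's actual implementation, and the Redis key names and `SecureRandom` ids are assumptions made for illustration. The idea: messages wait in a Redis list, and the loop only hands a message to your configured block while fewer than `max_concurrency` messages are in flight.

```ruby
require 'redis'
require 'securerandom'

# Rough conceptual sketch -- NOT SmoothQueue's actual code. The key names
# ("#{queue}:waiting", "#{queue}:processing") are illustrative assumptions.
def dispatch_ready_messages(redis, queue, max_concurrency, handler)
  loop do
    # Flow control: stop dispatching once the queue is at its limit
    break if redis.llen("#{queue}:processing") >= max_concurrency

    message = redis.lpop("#{queue}:waiting")
    break unless message # nothing waiting

    id = SecureRandom.uuid # SmoothQueue generates an id per message
    redis.rpush("#{queue}:processing", id)
    handler.call(id, message) # the block you passed to add_queue
  end
end
```

A real implementation would need the length check and the move to be atomic (a Lua script or MULTI/EXEC) so concurrent dispatchers can't oversubscribe the queue.

Your workers then report back with `SmoothQueue.done(id)` so the monitor can dispatch the next waiting message: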
```ruby
class HeavyLiftingWorker
  include SomeBackgroundJobFramework

  def process(id, message)
    # Do some heavy processing with the message { 'foo' => 'bar' }
    SmoothQueue.done(id)
  end
end

class VeryHeavyLiftingWorker
  include SomeBackgroundJobFramework

  def process(id, message)
    # Do some heavy processing with the message { 'bar' => 'baz' }
    SmoothQueue.done(id)
  end
end
```
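Note that if `process` raises before `SmoothQueue.done(id)` runs, the slot it occupied is never released. A defensive pattern worth considering (our suggestion, not documented gem behavior; `do_the_heavy_lifting` is a placeholder) is to release the slot in an `ensure` block:

```ruby
class HeavyLiftingWorker
  include SomeBackgroundJobFramework

  def process(id, message)
    do_the_heavy_lifting(message) # placeholder for your actual work
  ensure
    # Release the concurrency slot even when processing raises, so a
    # failing job can't permanently occupy one of the limited slots.
    SmoothQueue.done(id)
  end
end
```

If your framework retries failed jobs, verify how `done` interacts with retries before adopting this pattern.

Finally, enqueue messages through SmoothQueue instead of pushing jobs to your framework directly: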
```ruby
SmoothQueue.enqueue('heavy_lifting', { 'foo' => 'bar' })
SmoothQueue.enqueue('very_heavy_lifting', { 'bar' => 'baz' })
```
Please see LICENSE for licensing details.