Ent Unique Jobs
If your application code creates duplicate jobs, the unique jobs feature in Sidekiq Enterprise makes it easy to ensure only a single copy of a job is in Redis. For instance, perhaps you create a job to sync an address change with a 3rd party system every time a form is submitted. If the form is submitted twice, you don't need to create the second job if the first job is still pending.
To activate the feature, enable it in your initializer:
```ruby
# Disable uniqueness in testing; this has the potential to cause much head scratching...
Sidekiq::Enterprise.unique! unless Rails.env.test?
```
Declare a time period during which your job should be considered unique:
```ruby
class MyWorker
  include Sidekiq::Worker
  sidekiq_options unique_for: 10.minutes

  def perform(...)
  end
end
```
This means that a second copy of the job can be pushed to Redis after 10 minutes or once the first job has successfully processed. If your job retries for a while, 10 minutes can pass, allowing another copy of the same job to be pushed to Redis. Design your jobs so that uniqueness is considered best effort, not a 100% guarantee. The time limit is mandatory so that if a process crashes, any locks it holds won't last forever.
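As a rough mental model of the time-limited lock described above, here is a hypothetical sketch using an in-memory Hash in place of Redis (the real implementation stores the lock in Redis with a TTL; `FakeUniqueLock` and its key format are illustrative names, not Sidekiq API):

```ruby
# Sketch of a best-effort unique lock with a mandatory TTL.
# The clock is injectable so expiry can be demonstrated without sleeping.
class FakeUniqueLock
  def initialize(clock: -> { Time.now.to_f })
    @locks = {}     # key => expiry timestamp (seconds)
    @clock = clock
  end

  # Returns true if the lock was acquired (job may be pushed),
  # false if a live lock already exists (duplicate suppressed).
  def acquire(key, ttl_seconds)
    now = @clock.call
    return false if @locks[key] && @locks[key] > now
    @locks[key] = now + ttl_seconds
    true
  end

  # Called when the job finishes successfully (the default unlock policy).
  def release(key)
    @locks.delete(key)
  end
end

t = 0.0
lock = FakeUniqueLock.new(clock: -> { t })
lock.acquire("MyWorker:[123]", 600)   # first push: lock taken
lock.acquire("MyWorker:[123]", 600)   # duplicate within 10 minutes: rejected
t = 601.0
lock.acquire("MyWorker:[123]", 600)   # TTL expired: push allowed again
```

Note how expiry alone re-admits the job even though it never ran successfully; this is exactly why uniqueness is best effort rather than a guarantee.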
Jobs are considered unique based on `(class, args, queue)`, meaning a job with the same args can be pushed to different queues.
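To illustrate the `(class, args, queue)` triple, here is a hypothetical sketch of deriving a uniqueness key; the actual key format Sidekiq uses internally is an implementation detail and may differ:

```ruby
require "digest"

# Illustrative only: hash the (class, args, queue) triple into one key.
def unique_key(klass, args, queue)
  Digest::SHA1.hexdigest([klass, args.inspect, queue].join("|"))
end

unique_key("MyWorker", [123], "default")   # one key
unique_key("MyWorker", [123], "critical")  # different queue => different key
```

Because the queue participates in the key, pushing the same class and args to a second queue does not collide with the first job's lock.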
The uniqueness period for a scheduled job includes the delay time. If you use `MyWorker.perform_in(1.hour, ...)`, the uniqueness lock for this job lasts 70 minutes (the 1-hour delay + the 10-minute `unique_for` TTL). You won't be able to push the same job until it runs successfully or 70 minutes have passed.
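The 70-minute figure is just the sum of the two windows:

```ruby
delay       = 60 * 60   # perform_in(1.hour, ...)
unique_for  = 10 * 60   # sidekiq_options unique_for: 10.minutes
lock_window = delay + unique_for
lock_window / 60        # total lock duration in minutes
```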
A job that is pending retry will still hold the unique lock and prevent further jobs from being enqueued until the retry succeeds or the timeout passes. Manually removing the job from the retry queue will not remove the lock.
The `unique_until` option controls when the unique lock is removed. The default value is `:success`: the job will not unlock until it executes successfully; it remains locked even if it raises an error and goes into the retry queue.
The alternative value is `:start`: the job unlocks right before it starts executing. This removes the possibility of a race condition for some unique jobs between the job finishing and unlocking; read #3471 for details.
```ruby
sidekiq_options unique_for: 20.minutes, unique_until: :start
```
If the job raises an error, it will not retake the lock and may be enqueued in duplicate. I recommend avoiding the `:start` policy unless you know your job is affected by the race condition.
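The behavioral difference between the two policies can be sketched with a tiny in-memory simulation (assumed names; the real lock lives in Redis with a TTL, omitted here for brevity):

```ruby
require "set"

# Illustrative contrast of unique_until: :success vs :start.
class PolicyDemo
  def initialize(unique_until)
    @unique_until = unique_until
    @locks = Set.new
  end

  # Returns true if the job was pushed, false if suppressed as a duplicate.
  def push(key)
    return false if @locks.include?(key)
    @locks.add(key)
    true
  end

  def run(key)
    @locks.delete(key) if @unique_until == :start    # unlock before executing
    yield
    @locks.delete(key) if @unique_until == :success  # unlock only after success
  rescue
    # with :success, a failed job keeps the lock while it waits to retry
  end
end

success = PolicyDemo.new(:success)
success.push("k")
success.run("k") { raise "boom" }  # job fails and goes to the retry queue
success.push("k")                  # still locked: duplicate suppressed

start = PolicyDemo.new(:start)
start.push("k")
start.run("k") { raise "boom" }
start.push("k")                    # already unlocked: duplicate allowed
```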
If you really want to push a job and bypass the uniqueness check, pass `false` as the `unique_for` option.
The uniqueness feature makes an extra call to Redis before pushing the job. This network call is not protected by `reliable_push`, so uniqueness can raise network errors in your webapp and cause the push to Redis to fail. This is a trait of any client middleware that makes a network call, not something specific to uniqueness.
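A minimal sketch of why any pre-push network call has this trait, assuming a hypothetical middleware class (not Sidekiq's actual implementation): if the check raises, the block that would perform the push is never reached, so the error propagates to the caller.

```ruby
# Illustrative client middleware: consult Redis before allowing the push.
class FlakyUniqueCheck
  def initialize(redis_check)
    @redis_check = redis_check
  end

  def call(job)
    @redis_check.call  # e.g. SET key NX PX ttl; may raise a network error
    yield              # the actual push; only reached if the check succeeded
  end
end

pushed = false
mw = FlakyUniqueCheck.new(-> { raise IOError, "connection refused" })
begin
  mw.call({}) { pushed = true }   # raises IOError before the push runs
rescue IOError
end
pushed  # the job was never pushed
```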