Mike Perham edited this page Jun 4, 2015 · 11 revisions

Available in Sidekiq 3.0+

Sidekiq has a scalability limit: it normally uses a single Redis server. This usually isn't a problem: Redis is really fast, and on good hardware you can push 5,000-10,000 jobs/sec through Redis before you start to hit the ceiling. If you need to go beyond that limit, you have two choices:

  • Break your application into smaller applications. You can use one Redis server per application.
  • Shard your application's jobs across many Redis instances.
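The first option needs no special API: each smaller application simply points its own Sidekiq configuration at its own Redis server. A minimal sketch, assuming each application carries its Redis URL in a `REDIS_URL`-style environment variable (the variable name is an assumption, not required by Sidekiq):

```ruby
require 'sidekiq'

# Each application configures Sidekiq against its own Redis server.
# ENV['REDIS_URL'] is an assumed convention; use whatever your deployment provides.
Sidekiq.configure_client do |config|
  config.redis = { :url => ENV['REDIS_URL'] }
end

Sidekiq.configure_server do |config|
  config.redis = { :url => ENV['REDIS_URL'] }
end
```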

How to Shard

The latter option is reasonably simple with the Sidekiq::Client updates in Sidekiq 3.0:

```ruby
require 'sidekiq'
require 'redis-namespace'

# Two separate Redis shards, each behind its own connection pool.
REDIS_A = ConnectionPool.new { Redis.new(:url => "Your Redis A Url") }
REDIS_B = ConnectionPool.new { Redis.new(:url => "Your Redis B Url") }

# To create a new connection pool for a namespaced Sidekiq worker:
pool = ConnectionPool.new do
  client = Redis.new(:url => "Your Redis Url")
  Redis::Namespace.new("Your Namespace", :redis => client)
end

class SomeWorker
  include Sidekiq::Worker
end

# Create a job in the default redis instance
SomeWorker.perform_async

# Push a job to REDIS_A using the low-level Client API
client = Sidekiq::Client.new(REDIS_A)
client.push('class' => SomeWorker, 'args' => [])

Sidekiq::Client.via(REDIS_B) do
  # All jobs created within this block will go to B
  SomeWorker.perform_async
end
```
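Under the hood, `via` amounts to a block-scoped swap of the client's default connection pool. A minimal plain-Ruby sketch of that pattern, runnable without Redis (the class and thread-local names here are illustrative, not Sidekiq's actual internals):

```ruby
# Sketch of a block-scoped default pool, in the style of Sidekiq::Client.via:
# the pool is stashed in a thread-local for the duration of the block,
# then restored, so nested and concurrent callers don't interfere.
class ShardedClient
  DEFAULT_POOL = :default_pool

  def self.via(pool)
    previous = Thread.current[:shard_pool]
    Thread.current[:shard_pool] = pool
    yield
  ensure
    Thread.current[:shard_pool] = previous
  end

  def self.current_pool
    Thread.current[:shard_pool] || DEFAULT_POOL
  end
end

ShardedClient.via(:pool_b) do
  puts ShardedClient.current_pool  # => pool_b
end
puts ShardedClient.current_pool    # => default_pool
```

Because the previous value is restored in `ensure`, the override is safe even if the block raises.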

Sharding comes with some serious limitations:

  1. The Sidekiq API (all the code in sidekiq/api) does not support sharding; it only works with the default global Redis connection in Sidekiq.redis.
  2. The Web UI has the same restriction. You need to start a different Rack process for each Redis shard you want to monitor with the Web UI. (Sidekiq Pro supports multiple Web UIs in the same process)
  3. You need to spin up Sidekiq processes to execute jobs from each Redis instance. A Sidekiq server process only executes jobs from a single Redis instance.
  4. Sharding increases the complexity of your system. It's harder to mentally track which jobs are going where, making debugging harder.
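Limitation 3 above is usually handled at the deployment layer: run one Sidekiq server process per shard, each pointed at that shard's Redis. A sketch as a Foreman-style Procfile, assuming shard URLs are passed via the `REDIS_URL` environment variable (which Sidekiq honors by default); the hostnames are placeholders:

```shell
# Procfile: one Sidekiq server process per Redis shard.
# hostA/hostB are placeholder hostnames, not real servers.
sidekiq_a: env REDIS_URL=redis://hostA:6379/0 bundle exec sidekiq
sidekiq_b: env REDIS_URL=redis://hostB:6379/0 bundle exec sidekiq
```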

I don't recommend sharding unless all other options are unavailable.
