Sidekiq Heroku Autoscaler

Start/stop Sidekiq workers on Heroku.

Sidekiq performs background jobs. While its threading model allows it to scale more easily than worker-per-process background systems, people running test or lightly loaded systems on Heroku still want to scale down to zero workers to avoid racking up charges.


Requirements

Tested on Ruby 2.1.7 and the Heroku Cedar stack.

Installation

gem install autoscaler
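
If you manage gems with Bundler, you can instead add the gem to your Gemfile and run bundle install:

gem 'autoscaler'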

Getting Started

This gem uses the Heroku platform-api gem, which requires an OAuth token from Heroku. It will also need the Heroku app name. By default, these are specified through environment variables. You can also pass them to HerokuPlatformScaler explicitly.
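
For example, a sketch of passing them explicitly; the argument order (process type, OAuth token, app name) is an assumption, so check HerokuPlatformScaler for the exact signature:

require 'autoscaler/heroku_platform_scaler'

heroku = Autoscaler::HerokuPlatformScaler.new(
  'worker',           # Heroku process type to scale (assumed default)
  'your-oauth-token', # Heroku OAuth token (placeholder)
  'your-app-name')    # Heroku app name (placeholder)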


Install the middleware in your Sidekiq.configure_client and Sidekiq.configure_server blocks:

require 'autoscaler/sidekiq'
require 'autoscaler/heroku_platform_scaler'

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Client, 'default' => Autoscaler::HerokuPlatformScaler.new
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuPlatformScaler.new, 60) # 60 second timeout
  end
end

Limits and Challenges

  • HerokuPlatformScaler includes an attempt at a current-worker cache that may be an overcomplication, and it doesn't work very well on the server.
  • Multiple scale-down loops may be started, particularly if there are multiple jobs queued when the server comes up. Heroku seems to handle multiple scale-down commands well.
  • The scale-down monitor is triggered on job completion (and server middleware is only run around jobs), so if the server never processes any jobs, it won't turn off.
  • The retry and schedule lists are considered - if you schedule a long-running task, the process will not scale down.
  • If background jobs trigger jobs in other scaled processes, note that you'll need config.client_middleware in your Sidekiq.configure_server block in order to scale up (see the sketch after this list).
  • Exceptions while calling the Heroku API are caught and printed by default. See HerokuPlatformScaler#exception_handler to override this behavior.
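
A minimal sketch of that client-middleware-in-the-server setup, reusing the Getting Started example (the queue name and timeout are illustrative):

Sidekiq.configure_server do |config|
  # Jobs enqueued from inside background jobs go through client middleware,
  # so the server process also needs it in order to scale up other processes.
  config.client_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Client, 'default' => Autoscaler::HerokuPlatformScaler.new
  end
  config.server_middleware do |chain|
    chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuPlatformScaler.new, 60)
  end
end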



Scaling Strategies

You can pass a scaling strategy object instead of the timeout to the server middleware. The object (or lambda) should respond to #call(system, idle_time) and return the desired number of workers. See lib/autoscaler/binary_scaling_strategy.rb for an example.
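
For instance, a minimal lambda strategy could look like the sketch below; the queued and scheduled accessors on the system object are assumptions based on the interface described above, so check binary_scaling_strategy.rb for the real one:

# Keep one worker while there is work or the idle timeout has not elapsed;
# otherwise scale to zero.
strategy = lambda do |system, idle_time|
  pressure = system.queued + system.scheduled # assumed accessors
  (pressure > 0 || idle_time < 60) ? 1 : 0
end

# Passed in place of the timeout argument:
# chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuPlatformScaler.new, strategy)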

Initial Workers

Call Client#set_initial_workers to start workers on main-process startup; typically:

Autoscaler::Sidekiq::Client.add_to_chain(chain, 'default' => heroku).set_initial_workers
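
In context, a sketch of the client configuration (heroku stands in for the scaler from Getting Started):

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    heroku = Autoscaler::HerokuPlatformScaler.new
    # add_to_chain registers the client middleware and returns it,
    # so set_initial_workers can spin up workers at boot.
    Autoscaler::Sidekiq::Client.add_to_chain(chain, 'default' => heroku).set_initial_workers
  end
end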

Working caching

The scaler can cache the current number of working processes (see Limits and Challenges above). A sketch of the configuration, assuming the gem's Redis-backed counter cache:

scaler.counter_cache = Autoscaler::CounterCacheRedis.new(Sidekiq.method(:redis)) # class and argument are assumptions


Testing

The project is set up to run RSpec with Guard. It expects a Redis instance on a custom port, which is started by the Guardfile.

The HerokuPlatformScaler is not tested by default because it makes live API requests. Specify AUTOSCALER_HEROKU_APP and AUTOSCALER_HEROKU_ACCESS_TOKEN on the command line, and then watch your app's logs.
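
For example, a sketch assuming Guard is run through Bundler (substitute your own app name and token):

AUTOSCALER_HEROKU_APP=your-app-name AUTOSCALER_HEROKU_ACCESS_TOKEN=your-token bundle exec guard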

heroku logs --app ...


Authors

Justin Love, @wondible

License

Released under the MIT license.