Problems and Troubleshooting

Fernando Seror Garcia edited this page Dec 29, 2016 · 110 revisions


Read below for tips. If you still need help, use one of the project's public support channels.

You should not email any Sidekiq committer privately. Please respect our time and effort by sticking to the public channels above. Remember also that Sidekiq is free, open source software: support is not guaranteed; it is best-effort, subject to the availability of the Sidekiq committers. Sidekiq Pro customers get guaranteed support.


Sidekiq is multithreaded so your Workers must be thread-safe.

Use only thread-safe libraries!

Most popular Rubygems are thread-safe in my experience. A few exceptions to this rule...

Gems that are not thread-safe:

  • right_aws
  • aws-s3
  • basecamp
  • therubyracer #270

Some other gems can be troublesome in multithreaded environments.

Writing thread-safe code

Well-factored code is typically thread-safe without any changes. Always prefer instance variables and methods to class variables and methods. Require all necessary classes on startup so you aren't requiring code while executing jobs: Ruby's require statement is not atomic, as explained in this Stack Overflow answer.
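A minimal sketch of the distinction (`CounterWorker` is illustrative, and `include Sidekiq::Worker` is omitted so the example stays self-contained): instance state is per-job and safe because each job execution gets its own worker instance, while class-level mutable state is shared across threads and must be guarded.

```ruby
class CounterWorker
  COUNT_LOCK = Mutex.new
  @@processed = 0  # class variable: shared across ALL threads

  def perform(n)
    @result = n * 2  # instance variable: private to this job, thread-safe
    # Shared mutable state must be updated under a lock:
    COUNT_LOCK.synchronize { @@processed += 1 }
    @result
  end
end
```

Without the `Mutex`, concurrent `@@processed += 1` updates could interleave and lose increments.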

Don't ever use Ruby's Timeout module. You will get mysterious stuck or hung processes randomly.
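A safer alternative is to set timeouts on the I/O object itself. A sketch with `Net::HTTP` (`example.com` is just a placeholder host; no request is actually made here): these timeouts raise `Net::OpenTimeout` / `Net::ReadTimeout` cleanly, without `Timeout`'s thread-killing hazards.

```ruby
require 'net/http'

# Configure timeouts on the connection instead of wrapping the
# call in Timeout.timeout.
http = Net::HTTP.new("example.com", 443)
http.use_ssl = true
http.open_timeout = 5    # seconds to wait for the TCP connection
http.read_timeout = 10   # seconds to wait for each response read
```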

"Cannot find ModelName with ID=12345"

Sidekiq is so fast that it is quite easy to get transactional race conditions where a job will try to access a database record that has not committed yet. The clean solution is to use after_commit:

class User < ActiveRecord::Base
  after_commit :greet, :on => :create

  def greet
    # GreetWorker is an illustrative worker name
    GreetWorker.perform_async(id)
  end
end
Note: after_commit will not be invoked in your tests if you have use_transactional_fixtures enabled, but test_after_commit has been written to help out in this case.

If you aren't using ActiveRecord models, use a scheduled perform to run after you can be sure the transaction has committed:

MyWorker.perform_in(5.seconds, 1, 2, 3)

Either way, Sidekiq's retry mechanism has your back: the first attempt may fail with RecordNotFound, but a retry will succeed once the transaction has committed.

Job status polling works in development, not in production

If you poll your model periodically (say, from an Ajax request) to determine when your background job has completed, and the job completes in less than a second, you may find that your polling logic works in development but only sporadically in production.

This may be caused by Rails' use of Rails.cache. By default, Model#cache_key is only precise to the second, so updates that start and finish within the same second may cause your status polling to return a stale record. In databases that support sub-second time values (such as PostgreSQL), set config.active_record.cache_timestamp_format = :nsec in config/application.rb to increase the cache precision and avoid stale records.

Heroku "ERR max number of clients reached"

You've hit the max number of Redis connections allowed by your plan.

Limit the number of redis connections per process in config/sidekiq.yml. For example, if you're on Redis To Go's free Nano plan and want to use the Sidekiq web client, you'll have to set the concurrency down to 3.

:concurrency:  3

See #117 for a discussion on the topic.
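As a back-of-the-envelope sanity check (an assumption, not an exact Sidekiq formula; the per-process overhead varies by version), budget roughly `concurrency + 2` Redis connections per Sidekiq process:

```ruby
plan_limit  = 10   # e.g. a small hosted-Redis plan's connection cap
concurrency = 3    # from config/sidekiq.yml
processes   = 2    # how many Sidekiq processes you run

# Rough estimate: worker threads plus a couple of internal connections.
needed = processes * (concurrency + 2)
puts needed               # => 10
puts needed <= plan_limit # => true
```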

Sidekiq Web does not render correctly in production but works fine in development

Sidekiq Web wants to serve its CSS/JS assets out of the gem, but your production web server is not forwarding those requests to your app (where Sidekiq Web could serve them); instead it returns a 404 when the files aren't found on the filesystem.

If you are using Rails 3.1 or 3.2 along with the asset pipeline, try putting the following into your config.ru instead of specifying the route in routes.rb:

require 'sidekiq/web'

run Rack::URLMap.new(
  "/"        => Rails.application,
  "/sidekiq" => Sidekiq::Web
)

If you are using Nginx, make sure to uncomment this line in config/environments/production.rb:

config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect'

and comment this one:

# config.action_dispatch.x_sendfile_header = "X-Sendfile"

The threads are not processing jobs

If you are migrating from Resque, make sure the Redis database does not contain any old jobs. You can completely clear all keys and values in all databases with redis-cli flushall (careful: this wipes everything in Redis).

Another common problem is that you might have defined a namespace in Sidekiq.configure_server but not in Sidekiq.configure_client or named it something else. Make sure you configure both!
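A sketch of a matching configuration (the URL and the "my_app" namespace are illustrative; namespaces require the redis-namespace gem in your Gemfile):

```ruby
# config/initializers/sidekiq.rb
# Use the SAME namespace on both the server and client sides.
Sidekiq.configure_server do |config|
  config.redis = { url: "redis://localhost:6379/0", namespace: "my_app" }
end

Sidekiq.configure_client do |config|
  config.redis = { url: "redis://localhost:6379/0", namespace: "my_app" }
end
```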

Another issue that some have experienced is caused by rspec-sidekiq. You need to make sure that rspec-sidekiq is in the test group ONLY:

group :test do
  gem 'rspec-sidekiq'
end

A related Stack Overflow question discusses this problem.

Too many connections to MongoDB

If you are using Mongoid you'll also want to use the kiqstand middleware to properly disconnect workers so your connections aren't overloaded.

Postgres connection corruption

If you see strange Postgres connection errors, try using ActiveRecord's connection reaper to clean up dead connections. Add this to your database.yml:

reaping_frequency: 10

My Sidekiq process is disappearing!?

Linux's OOM killer might kill Sidekiq if your machine is running low on memory and can't swap. Use dmesg | egrep -i 'killed process' to search for OOM activity:

[102335.319388] Killed process 6567 (ruby) total-vm:1333004kB, anon-rss:355088kB, file-rss:688kB

The solution is to get more memory or optimize your workers. See Memory Bloat below for tips.

My Sidekiq process is crashing, what do I do?

Only two things can cause a Ruby VM to crash: a VM bug or a native gem bug. Sidekiq is pure Ruby and cannot crash the Ruby VM on its own. A couple of notes:

  • Ruby can have a bug - make sure you are running the latest Ruby version
  • native gem bugs can cause crashes - make sure you are running the latest version of all native gems so you have the latest fixes
  • every time the Sidekiq process crashes, any messages being processed are lost. You can avoid this with Sidekiq Pro's reliable fetch feature.

You can get a list of all native gems in your app with this command:

bundle exec ruby -e 'puts Gem.loaded_specs.values.select { |i| !i.extensions.empty? }.map { |i| i.name }'

Sidekiq tries to use a connection from a child process without reconnecting

Since 2.9.0, Sidekiq assumes you don't touch redis until the app is booted and forked. Therefore, you'll get Redis::InheritedError if your code or a gem uses the Sidekiq client API before the app server has forked. For example: enqueuing a job upon app startup.
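With a preforking server such as Unicorn, one workaround is to delay boot-time enqueues until after the fork, so Redis is first touched in the child process. A sketch (the after_fork hook is Unicorn's; StartupWorker is an illustrative job name):

```ruby
# config/unicorn.rb (fragment)
after_fork do |server, worker|
  # Safe to use the Sidekiq client here: this runs in the forked child.
  StartupWorker.perform_async if worker.nr == 0  # enqueue once, from the first worker
end
```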

Jobs are mysteriously disappearing or failing without anything in the logs!

Often this is due to an old, left-over Sidekiq process that is still running; make sure old processes are killed. You can also hit this on a server hosting multiple apps if you don't set a distinct Redis namespace for each Sidekiq instance.

Why isn't Sidekiq using all of the available cores?

Each Sidekiq process running on MRI will only use one core, regardless of the number of threads. To get the benefit of multiple cores, you should run several Sidekiq processes.
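A deployment sketch (assumes GNU coreutils' nproc and a Bundler-managed app; adjust paths and environment to taste):

```shell
# Start one Sidekiq process per CPU core; each MRI process can use
# at most one core regardless of its thread count.
CORES=$(nproc)
for i in $(seq 1 "$CORES"); do
  bundle exec sidekiq -e production -C config/sidekiq.yml &
done
wait
```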


Be sure to boot the gems in your application by adding:

require 'bundler/setup'
Bundler.require

to the top of your main Sinatra file. Read more about booting Bundler on the Bundler site.

Memory bloat

If you have memory bloat and your Sidekiq process grows from X MB to BIG MB over time, the cause is usually unoptimized ActiveRecord queries. Something in your Workers might be querying the database and loading tens or hundreds of thousands of ActiveRecord instances. Example:

# See if product search returns no results
# Terrible, do not do this! (Product.search is an illustrative scope;
# .length forces every matching record to be loaded and instantiated)
return "No results" unless Product.search(query).length > 0

If the product search returns 10,000 results, this query will create 10,000 objects and then immediately throw them away. This will expand the heap and cause VM bloat. For more information about how the Ruby heap works, check out these slides.

The right way:

# See if product search returns no results
# Much faster! .count issues a single SELECT COUNT(*) query
return "No results" if Product.search(query).count == 0

Unfortunately it's up to you to determine which worker and query is causing the bloat. Another example:

Wrong, might load millions of user objects in memory:

User.all.each { |u| u.something }

Right, will iterate through 1000 users at a time:

User.find_each { |u| u.something }

In short, it is really easy to use ActiveRecord inefficiently. Read through your queries and make sure you understand exactly what each will do.
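The batching idea behind find_each can be sketched in pure Ruby: each_slice walks a large collection in fixed-size chunks rather than materializing everything at once, the same way find_each fetches database rows 1,000 per batch by default.

```ruby
ids = (1..10_000).to_a

batches = 0
peak_batch_size = 0
# Process the collection 1,000 items at a time; memory use is bounded
# by the batch size, not the total collection size.
ids.each_slice(1_000) do |batch|
  batches += 1
  peak_batch_size = [peak_batch_size, batch.size].max
end

puts batches          # => 10
puts peak_batch_size  # => 1000
```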

Frozen Processes

If your Sidekiq process is not performing any work, send it the TTIN signal to dump backtraces to the log. That will show you where the threads are stuck.

If your process "freezes" or no jobs seem to be finishing, it's possible a remote network call is pending forever. This is common in two scenarios:

  • DNS lookup - resolving a hostname might hang. This has a serious side effect in MRI of locking up everything because of the way MRI uses DNS by default. A possible solution is to run require 'resolv-replace' in your initializer, which installs a pure Ruby DNS resolver that works concurrently.
  • Net::HTTP - unresponsive remote servers can cause a Net::HTTP call to hang and lock up your threads. Set open_timeout and read_timeout to ensure your code raises an exception rather than hanging forever.

If the Sidekiq process is not responding to signals at all (nothing appears in the logs when you send TTIN), you can use GDB to dump backtraces for all threads:

sudo gdb `rbenv which ruby` [PID]
(gdb) info threads
  Id   Target Id         Frame 
  37   Thread 0x7f8b289d8700 (LWP 7994) "ruby-timer-thr" 0x00007f8b27a20d13 in *__GI___poll (fds=<optimized out>, fds@entry=0x7f8b289d7ec0, nfds=<optimized out>, 
    nfds@entry=1, timeout=timeout@entry=100) at ../sysdeps/unix/sysv/linux/poll.c:87
  36   Thread 0x7f8b23eb0700 (LWP 7995) "ruby" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
  35   Thread 0x7f8b23c2e700 (LWP 7996) "ruby" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
  34   Thread 0x7f8b239ac700 (LWP 7997) "ruby" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
  33   Thread 0x7f8b237aa700 (LWP 7998) "ruby" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
  32   Thread 0x7f8b28844700 (LWP 8002) "SignalSender" sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:86
  31   Thread 0x7f8b1e1bf700 (LWP 8003) "ruby" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
  30   Thread 0x7f8b1e0be700 (LWP 8006) "ruby" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
  29   Thread 0x7f8b1dd81700 (LWP 8009) "ruby" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
  28   Thread 0x7f8b1dc80700 (LWP 8010) "ruby" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
  27   Thread 0x7f8b1db7f700 (LWP 8011) "ruby" pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162

(gdb) set logging file gdb_output.txt
(gdb) set logging on
(gdb) set height 10000
(gdb) t a a bt
(gdb) quit

Now put the contents of gdb_output.txt into a gist and open a Sidekiq issue.

You can get the Ruby backtrace of the current (hung) thread by running this in GDB. Note: it will print to the process's stdout, which might be a logfile, and the frames appear in reverse order compared to a normal Ruby backtrace.

(gdb) call (void)rb_backtrace()

This is an excellent blog post about using GDB with Ruby.

Lingering processes on the Busy tab

A race condition in Sidekiq's heartbeat, fixed in [#2982], could rarely lead to lingering processes on the Busy tab. To clean them up, run the following script, modifying it as necessary to connect to your Redis. After 60 seconds, the lingering processes should disappear from the Busy page.

require 'redis'
r = Redis.new(url: "redis://localhost:6379/0")
# uncomment if you need a namespace
#require 'redis-namespace'
#r = Redis::Namespace.new("foo", redis: r)
r.smembers("processes").each do |pro|
  r.expire(pro, 60)
  r.expire("#{pro}:workers", 60)
end

No leak issues please

I don't accept generic memory leak issues for Sidekiq. Memory leaks can be caused by any part of the Ruby VM or gem in your application. Unless you can show evidence that Sidekiq is actually the root problem, please don't open an issue.

Read Sam Saffron's blog post about memory leaks for how to instrument and track down any leaks in your Sidekiq processes.
