Do not scale down workers if there is more than one working worker.

When after_perform_scale_down is called there is always at least one working worker, namely the one running this job, but more workers may be working at the same time.

We can't scale down based on the number of pending jobs alone: if several jobs are queued at the same time and enough workers are available, they may all be picked up at once, leaving many working jobs and zero pending jobs, yet we must not kill those workers.

By checking that no workers other than the one running this job are still working, we prevent this from happening.
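
To make the scenario concrete, here is a small illustrative sketch of what Resque.info reports when three jobs are enqueued at once and picked up immediately (the counts are hypothetical; the :pending and :working keys are the ones read in the diff below):

# Three jobs enqueued together, three workers grab them right away.
Resque.info[:pending]   # => 0  -- the queue is empty
Resque.info[:working]   # => 3  -- all three jobs are still running

# Old check: Scaler.job_count.zero? is true, so workers would be scaled
# down to zero while two of the jobs are still mid-perform.
# New check: Scaler.working_job_count == 1 is false, so nothing is killed.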
commit b6ee7c309708a46af2c7c4ba783583a566a646a9 (parent 4dc4881)
@vicentemundim authored
Showing with 7 additions and 2 deletions.
  1. +7 −2 lib/heroku-resque-auto-scale.rb
lib/heroku-resque-auto-scale.rb
@@ -16,12 +16,17 @@ def workers=(qty)
def job_count
Resque.info[:pending].to_i
end
+
+ def working_job_count
+ Resque.info[:working].to_i
+ end
end
end
def after_perform_scale_down(*args)
- # Nothing fancy, just shut everything down if we have no jobs
- Scaler.workers = 0 if Scaler.job_count.zero?
+ # Nothing fancy, just shut everything down if we have no pending jobs
+ # and exactly one working job (which is this job itself)
+ Scaler.workers = 0 if Scaler.job_count.zero? && Scaler.working_job_count == 1
end
def after_enqueue_scale_up(*args)
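
For context, a minimal sketch of how a job would typically use these hooks, assuming the gem exposes its module as HerokuResqueAutoScale (the job class, queue, and arguments here are made up for illustration):

require 'resque'
require 'heroku-resque-auto-scale'

class ImageConversionJob
  extend HerokuResqueAutoScale  # mixes in the after_enqueue/after_perform hooks

  @queue = :images

  def self.perform(image_id)
    # ... do the actual work ...
    # after_perform_scale_down runs when perform finishes; with this commit it
    # only sets workers to 0 when no jobs are pending and this is the only
    # job still working.
  end
end

# Enqueuing fires after_enqueue_scale_up, which spins up a worker on demand.
Resque.enqueue(ImageConversionJob, 42)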