Do not scale down workers if there is more than one working worker #3

Merged
2 participants

@vicentemundim

When after_perform_scale_down is called there is always at least one working worker — the one running the current job — but there may be more.

We can't scale down based solely on the number of pending jobs: if multiple jobs are queued at the same time and we have as many workers as jobs, there may be many working jobs and no pending jobs, yet we can't kill those workers.

By checking whether any workers are working besides the one running the current job, we prevent this from happening.
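The guard described above can be sketched as follows. This is a minimal, hypothetical illustration: `Resque.info` is stubbed with a writable hash so the snippet runs standalone, and `safe_to_scale_down?` is an invented helper name; in the actual gem these counts come from Resque's own stats and the check lives inline in the `after_perform_scale_down` hook.

```ruby
# Stub of Resque.info so the example is self-contained. In a real app
# Resque populates :pending and :working from its queues and workers.
module Resque
  def self.info
    @info ||= { :pending => 0, :working => 1 }
  end

  def self.info=(hash)
    @info = hash
  end
end

# Hypothetical helper mirroring the PR's scale-down condition.
def safe_to_scale_down?
  pending = Resque.info[:pending].to_i
  working = Resque.info[:working].to_i
  # Only scale to zero when nothing is queued AND the only working job
  # is the one running this after_perform hook (it counts itself).
  pending.zero? && working == 1
end

# The finishing job is the sole worker: safe to shut everything down.
Resque.info = { :pending => 0, :working => 1 }
puts safe_to_scale_down?  # => true

# Other workers are still busy, even with an empty queue: keep them.
Resque.info = { :pending => 0, :working => 3 }
puts safe_to_scale_down?  # => false
```

Note that checking `Scaler.job_count.zero?` alone would return true in the second case as well, which is exactly the race this pull request closes.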

vicentemundim added some commits
@vicentemundim Do not scale down workers if there is more than one working worker.
b6ee7c3
@vicentemundim Using ps_scale heroku command instead of set_workers to be compatible with Cedar apps
aeb2c0f
@markquezada merged commit 7e0660b into markquezada:master
Commits on Sep 17, 2012
Showing with 8 additions and 3 deletions.
  1. +8 −3 lib/heroku-resque-auto-scale.rb
lib/heroku-resque-auto-scale.rb
@@ -10,18 +10,23 @@ def workers
       end

       def workers=(qty)
-        @@heroku.set_workers(ENV['HEROKU_APP'], qty)
+        @@heroku.ps_scale(ENV['HEROKU_APP'], :type => :worker, :qty => qty)
       end

       def job_count
         Resque.info[:pending].to_i
       end
+
+      def working_job_count
+        Resque.info[:working].to_i
+      end
     end
   end

   def after_perform_scale_down(*args)
-    # Nothing fancy, just shut everything down if we have no jobs
-    Scaler.workers = 0 if Scaler.job_count.zero?
+    # Nothing fancy, just shut everything down if we have no pending jobs
+    # and one working job (which is this job)
+    Scaler.workers = 0 if Scaler.job_count.zero? && Scaler.working_job_count == 1
   end

   def after_enqueue_scale_up(*args)