Scale down the workers when no more are needed.
This may result in more API queries, but it reduces the cost of workers on Heroku when one long-running process would otherwise keep several workers up.
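A minimal sketch of the scale-down decision, assuming a helper that maps the pending job count to a desired worker count; the method name and `max_workers` parameter are illustrative, not the gem's actual API:

```ruby
# Hypothetical helper: given the number of pending Delayed::Job records,
# decide how many Heroku workers should stay up.
# `max_workers` stands in for the WORKLESS_MAX_WORKERS setting.
def target_worker_count(pending_jobs, max_workers: 1)
  # No pending work: scale all workers down to zero.
  return 0 if pending_jobs.zero?

  # Otherwise keep at most one worker per pending job, capped at the maximum.
  [pending_jobs, max_workers].min
end
```

Calling this on every job enqueue/complete is what drives the extra API hits mentioned above, since each change in the pending count may trigger a scale call.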
Adding Coveralls support
This is to see the spec/test coverage
SimpleCov to check coverage
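A typical setup for the two commits above might look like the following; this is a sketch of the common SimpleCov/Coveralls pattern, not necessarily this repo's exact spec_helper:

```ruby
# spec/spec_helper.rb — coverage must start before any app code is loaded,
# otherwise already-loaded files are missed in the report.
require 'simplecov'
require 'coveralls'

# Coveralls.wear! starts SimpleCov and configures reporting to coveralls.io
Coveralls.wear!
```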
Merge branch 'mongoid_fix'
Downscale the workers when they are no longer needed.
This will increase the hits on the Heroku API, but let's see if that's a real problem.
This works, so remove the pending statement.
Coverage remained the same when pulling 2ffc67b on dynamic_scale_down into 892a9a9 on master.
Hey, this is a really nice fork.
You may, however, want to document in this fork/pull request that when setting WORKLESS_MAX_WORKERS to a value greater than one on Heroku Cedar, Delayed::Job needs to be configured with:
Delayed::Worker.raise_signal_exceptions = :term
otherwise, a race condition occurs: if you have multiple workers running and the first worker finishes while a second worker is still going, the call to ::Heroku::API.post_ps_scale always incorrectly kills the last worker.
This leaves a Delayed::Job record that still thinks it's locked to a worker process, even though that process has been SIGKILL'ed by Heroku.
Because this hanging job remains locked (even though nothing is processing it), workless doesn't spin down the remaining worker (which isn't doing anything), and you end up with a job that never finishes and a worker that never spins down.
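To make the fix above concrete, the setting from the comment would typically go in a Rails initializer; the file path is illustrative, but the setting itself is the one quoted earlier:

```ruby
# config/initializers/delayed_job.rb (path is illustrative)
#
# On Heroku Cedar, dynos receive SIGTERM before being killed. With :term,
# Delayed::Job raises on SIGTERM so the in-progress job is unlocked and
# re-enqueued instead of staying locked to a dead worker process.
Delayed::Worker.raise_signal_exceptions = :term
```

Without this, the stale lock described above prevents workless from ever reaching a zero-pending state, so the last worker is never scaled down.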
Very nice catch! I didn't know about the raise_signal_exceptions option! Maybe this can also solve another issue we have on the lostboy branch right now, where the workers aren't scaled down. Thanks again!