
Tripping Heroku API Limits #33

aew opened this Issue Nov 15, 2012 · 9 comments



aew commented Nov 15, 2012

I've run into an issue where the Heroku API is apparently being throttled. Workless is the only gem in my app that depends on the Heroku API (via heroku-rb, which uses Excon). The error message is as follows:

Expected(200) <=> Actual(429 Unknown)
request => {:chunk_size=>1048576, :connect_timeout=>60, :headers=>{"Accept"=>"application/json", "Accept-Encoding"=>"gzip", "User-Agent"=>"heroku-rb/0.3.6", "X-Ruby-Version"=>"1.9.2", "X-Ruby-Platform"=>"x86_64-linux", "Authorization"=>"XXXX=", "Host"=>"api.heroku.com:443"}, :instrumentor_name=>"excon", :mock=>false, :nonblock=>false, :read_timeout=>60, :retry_limit=>4, :ssl_ca_file=>"/app/vendor/bundle/ruby/1.9.1/gems/excon-0.16.8/data/cacert.pem", :ssl_verify_peer=>true, :write_timeout=>60, :host=>"api.heroku.com", :host_port=>"api.heroku.com:443", :path=>"/apps/hdosportco/ps", :port=>"443", :query=>nil, :scheme=>"https", :expects=>200, :method=>:get}
response => #<Excon::Response:0x007f01b0fd3910 @body="{\"error\":\"Your account reached the API rate limit\nPlease wait a few minutes before making new requests\"}", @headers={"Cache-Control"=>"no-cache", "Content-Type"=>"application/json; charset=utf-8", "Date"=>"Thu, 15 Nov 2012 18:11:22 GMT", "Retry-After"=>"1353003093", "Status"=>"429", "Strict-Transport-Security"=>"max-age=500", "X-RateLimit-Limit"=>"486", "X-RateLimit-Remaining"=>"0", "X-Runtime"=>"54", "Content-Length"=>"105", "Connection"=>"keep-alive"}, @status=429>

This seems to prompt a retry. I had the workers set to 10. Is it possible that workless was scaling workers up and down too rapidly with a volatile work queue? I could certainly be wrong about it, as I haven't dug too deeply. I've set max workers to 4 to see if this would help, but it did not.
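For illustration, a wrapper that backs off before re-issuing a rate-limited call might look like the following. `RateLimited` and `with_rate_limit_retry` are hypothetical names, not part of workless or heroku-rb; the sketch only shows the backoff shape a caller could use around the API call that produced the 429 above.

```ruby
# RateLimited stands in for the Excon error workless actually sees on a 429;
# the real exception class and how Retry-After is surfaced are assumptions.
class RateLimited < StandardError
  attr_reader :retry_after
  def initialize(retry_after = 1)
    @retry_after = retry_after
    super("API rate limit reached")
  end
end

# Run the block, waiting retry_after seconds between attempts on a 429.
def with_rate_limit_retry(max_attempts: 3, sleeper: ->(s) { sleep(s) })
  attempts = 0
  begin
    attempts += 1
    yield
  rescue RateLimited => e
    raise if attempts >= max_attempts
    sleeper.call(e.retry_after)  # honor the server's requested wait
    retry
  end
end
```

The `sleeper` parameter is only there so the waiting strategy can be swapped out (or stubbed in a test) instead of hard-coding `sleep`.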

davidakachaos added a commit to davidakachaos/workless that referenced this issue Dec 7, 2012

Scale the workers after_commit
This will scale the workers after committing the changes
to the database. This will fix issue #34.

Main point of this is to relax the scaling to happen when
the jobs are in the database. This may also have an effect 
on issue #33.

I'm running into API limits now.

The issue is that workless calls the Heroku ps endpoint to get the number of workers each time a job is created or deleted, to figure out whether it needs to scale the number of workers. If you have a lot of jobs, it's very easy to hit Heroku API limits.

It basically means that workless, as it is now, is only usable if you create a modest number of jobs (i.e. the rate of job creation has to be less than half the Heroku API rate limit, because each job causes two calls to the ps endpoint).
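The arithmetic is simple enough to spell out. The 486 figure comes from the X-RateLimit-Limit header in the error above; treating it as a per-window budget (the window length isn't stated in the header) gives:

```ruby
# Back-of-the-envelope sustainable job throughput under the observed limit.
api_limit = 486                       # X-RateLimit-Limit from the 429 response
calls_per_job = 2                     # one ps call on create, one on delete
max_jobs = api_limit / calls_per_job  # creating jobs faster than this guarantees 429s
puts max_jobs
```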

geemus commented Jul 15, 2013

Another (possible) further improvement on this might be to simply write the last known worker count to a file as a cache. That way you only have to hit the API if this file doesn't exist and/or when you need to change the value (at which point the process would also write the file). There are some synchronization/mutex type concerns there, but since setting the scale should be idempotent you can probably not worry about it and/or do this rather naively. Might be worth a try anyway, since it should reduce the usage when the queue has work from 1-2 calls per job to 0-1 calls per job (and probably usually 0). Happy to discuss further and/or help implement, but I wanted to start the discussion first in case I missed something (I know the Heroku/API side well, but pretty new to workless).

It's a good idea, but a file is out of the question: there's no shared disk between Heroku dynos, and the filesystem is ephemeral anyway.

What might work is storing the number in Rails.cache instead, but only if it's backed by a shared storage implementation, like an external memcached. Because of that, I think it would have to be a config option that is off by default.
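As a sketch of that idea (all class and method names here are illustrative, not workless's real API), scaling would consult a cache first and only fall back to the ps endpoint on a miss. `MemoryCache` stands in for Rails.cache using the same fetch/write contract; as noted above, in practice it would have to be a shared store such as memcached for this to work across dynos:

```ruby
# Hash-backed stand-in for Rails.cache (illustration only).
class MemoryCache
  def initialize; @store = {}; end
  def fetch(key)
    @store.key?(key) ? @store[key] : (@store[key] = yield)
  end
  def write(key, value); @store[key] = value; end
end

# Hypothetical scaler: hits the API only on a cache miss or when the
# desired count actually changes.
class WorkerScaler
  CACHE_KEY = "workless:worker_count"

  def initialize(cache, api)
    @cache, @api = cache, api
  end

  def scale_to(desired)
    current = @cache.fetch(CACHE_KEY) { @api.fetch_worker_count }  # API call only on miss
    return if current == desired                                   # no-op: zero API calls
    @api.set_worker_count(desired)
    @cache.write(CACHE_KEY, desired)
  end
end
```

With a warm cache and a steady queue, most `scale_to` calls return without touching the API at all, which is exactly the 1-2 calls per job down to 0-1 reduction described above.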

geemus commented Jul 15, 2013

@radanskoric - yeah, good point. I was trying to figure out good places to put the info that would be available (and not heroku config vars as that is problematic in terms of causing releases and stuff). Seems like storing it somewhere (rather than always querying) would be preferable. In a cache or even the DB would be better in some ways, but it becomes harder to generalize toward something that works for everybody.

Unfortunately, Heroku config vars are also out of the question: setting them requires API calls too, and it triggers a restart of all dynos.

The most general solution would be to let the user provide worker-count storing and retrieving implementations via config, and to include a sample Rails.cache-based implementation in the docs. That way, the user can easily switch the storage to something else, like the DB, if they wish.
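A minimal sketch of that config surface might look like this. The `Workless::Config` name and the callable contract are assumptions for illustration, not the gem's actual interface; the reader returning nil would signal "unknown, fall back to the API":

```ruby
# Hypothetical pluggable storage for the cached worker count.
module Workless
  class Config
    # read_worker_count: -> Integer or nil (nil means "ask the API")
    # write_worker_count: ->(Integer) persists the latest known count
    attr_accessor :read_worker_count, :write_worker_count
  end

  def self.config
    @config ||= Config.new
  end
end

# Sample implementation a user might supply; a plain Hash stands in here
# for a shared store like Rails.cache backed by memcached.
store = {}
Workless.config.read_worker_count  = -> { store[:workers] }
Workless.config.write_worker_count = ->(n) { store[:workers] = n }
```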

geemus commented Jul 15, 2013

Alternatively, perhaps we can rely on existing mechanisms here? ie instead of having this happen during every job it could add a new job to the queue (perhaps just if one doesn't already exist) that would do the scaling? Not exactly sure of the specifics, but something like that might work in that you can expect that the queue/delay stuff should be working or you wouldn't be using this in the first place.
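The "at most one pending scaling job" part of this idea could be sketched as follows, with a plain array standing in for Delayed::Job's table (the names are illustrative, not an existing workless mechanism):

```ruby
# Instead of calling the Heroku API inline on every job create/delete,
# enqueue a single dedicated scaling job, skipping the enqueue when one
# is already pending.
SCALE_JOB = :scale_workers

def enqueue_scaling_job(queue)
  queue << SCALE_JOB unless queue.include?(SCALE_JOB)  # at most one pending
end
```

However many jobs are created in a burst, only one scaling job (and hence one round of API calls) would be outstanding at a time.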

Are you referring to delayed job functionality of scheduling a job for the future?

For the scheduling functionality to work, either a worker needs to be running the whole time, or something else needs to wake one up when the time comes, and that would defeat the whole purpose of workless.

Actually, just generally speaking, if you need to have jobs scheduled in the future and you are not prepared to have at least one worker dyno running, waiting for the job's time to come, you need something more powerful than workless.

Workless is basically a very good intermediate solution for saving money while you don't yet need a more powerful external worker-management solution. For example, one of the apps I'm working on is still using workless, while I've switched another to be managed by hirefire.io services.

geemus commented Jul 15, 2013

@radanskoric - fair enough. Perhaps it is simply a matter of switching to something else once you get to a certain level.


lostboy commented Aug 19, 2013

Closing this due to inactivity and because a PR has been merged

@lostboy lostboy closed this Aug 19, 2013
