adding ratelimit capability to async.queue #212

Closed

@sramam
sramam commented Dec 16, 2012

see documentation & tests for details.

@sramam sramam adds rate limit capability to async.
see documentation & tests.
3c2c764
@coreyjewett
Contributor

Rate limiting is what queue already does; "minimum delay" would be a more appropriate name for the option you are proposing.

I only read the code, but I believe your implementation will only work as expected (or desired) when concurrency is 1. If so, wouldn't it be more appropriate to just wrap your worker function with the delay? I have used some Underscore functions for this before.

Something like one of these might be what you want:

  var _ = require('underscore');
  var async = require('async');

  // delayed by batches
  var q = async.queue(function(id, cb){
    _.delay(function(){
      console.log(id, new Date().getTime());
      cb();
    }, 250);
  }, 3);
  for (var i = 1; i <= 11; i++) { q.push(i); }

  // delayed by batches; when concurrency is 1 this is effectively a delay between each task
  var q = async.queue(function(id, cb){
    _.delay(function(){
      console.log(id, new Date().getTime());
      cb();
    }, 250);
  }, 1);
  for (var i = 1; i <= 11; i++) { q.push(i); }

  // delayed by batches; the leading task on an empty queue has no delay
  var q = async.queue(function(id, cb){
    _.delay(function(){
      console.log(id, new Date().getTime());
      cb();
    }, q.__delayed ? 250 : 0);
    q.__delayed = true;
  }, 3);
  q.drain = function() { q.__delayed = false; };
  for (var i = 1; i <= 11; i++) { q.push(i); }

The behavior of the above examples is not exactly what you are proposing: when concurrency > 1 it basically causes batching behavior, but I believe this is identical to your implementation.

If you aren't using Underscore (and don't want to), then you could also do this using just setTimeout.
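
For instance, a setTimeout-only version of the first snippet might look like this (just a sketch of the idea, same 250 ms delay and concurrency of 3 as above, not code from this PR):

  var async = require('async');

  // same batching behavior as the first example, without Underscore
  var q = async.queue(function(id, cb){
    setTimeout(function(){
      console.log(id, new Date().getTime());
      cb();
    }, 250);
  }, 3);
  for (var i = 1; i <= 11; i++) { q.push(i); }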

@sramam
sramam commented Dec 17, 2012

Hmm... rate limiting, to me, means that irrespective of the number of workers,
the app will not make more than one request every n milliseconds.

The pull request does exactly this.

That said, increasing concurrency while rate-limiting seems like a contrived setup.
The only case where it's even valid is if the response takes longer than n ms.

This implementation does not box the user in for the unlikely case of needing
multiple rate-limited queues.

The Underscore/setTimeout approach will require that global state be maintained
across the various invocations of the task processor (see the sketch below); otherwise
it will suffer precisely the batching effect you mention.
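
To illustrate what I mean by shared state (again, just a sketch, not the code in this pull request; names like minGap and rateLimitedWorker are made up here): one timestamp shared across all worker invocations, so task starts are spaced at least n ms apart regardless of concurrency.

  var async = require('async');

  var minGap = 250;          // minimum ms between task starts ("n")
  var nextSlot = Date.now(); // shared across all worker invocations

  // wraps any worker so that consecutive task starts are at least
  // minGap ms apart, independent of the queue's concurrency
  function rateLimitedWorker(worker) {
    return function(task, cb) {
      var now = Date.now();
      var wait = Math.max(0, nextSlot - now);
      nextSlot = Math.max(now, nextSlot) + minGap; // reserve the next start slot
      setTimeout(function(){ worker(task, cb); }, wait);
    };
  }

  var q = async.queue(rateLimitedWorker(function(id, cb){
    console.log(id, new Date().getTime());
    cb();
  }), 3);
  for (var i = 1; i <= 11; i++) { q.push(i); }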

Many APIs have rate limits as a function of the service subscribed to.
This implementation makes a lot of semantic sense from the perspective of
application logic. It also maintains the flexibility that seems to be a hallmark of
async. Hopefully you agree and accept the pull request.

Happy to hear thoughts on how to improve it in any event.

@mitar
mitar commented Jan 19, 2013

+1

@caolan
Owner
caolan commented Mar 28, 2014

This feels like it should be a separate tool to the existing queue

@caolan caolan closed this Mar 28, 2014