crawlimiter

About

Crawlimiter is a promise-based rate limiter for node.js which is useful for crawling/api requests. Crawlimiter uses hierarchical token buckets to handle multi-level rate limits and also provides a request queue.

Usage

Import the Crawlimiter module and declare a new limiter. Crawlimiter takes a list of rate limits. Each rate limit must contain the total number of requests and a time interval given in seconds. Each limit also supports an optional initial quota field; if none is given, the initial quota defaults to the request limit. The order in which the limits are provided does not matter.

const Crawlimiter = require("crawlimiter");

const apiLimiter = new Crawlimiter([
    {requestLimit: 20, timeInterval: 6, initialQuota: 20},
    {requestLimit: 100, timeInterval: 60},
    {requestLimit: 200, timeInterval: 180},
]);
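To illustrate how hierarchical token buckets handle multi-level limits like the configuration above, here is a minimal, self-contained sketch (an illustration of the general technique, not Crawlimiter's actual internals): each level is its own bucket, and a request is granted only when every level still has a token.

```javascript
// Illustrative hierarchical token buckets (not Crawlimiter's source).
class Bucket {
    constructor(requestLimit, timeIntervalSeconds, initialQuota = requestLimit) {
        this.limit = requestLimit;
        this.tokens = initialQuota;
        this.refillMs = (timeIntervalSeconds * 1000) / requestLimit; // ms per token
        this.lastRefill = Date.now();
    }
    refill(now = Date.now()) {
        const earned = Math.floor((now - this.lastRefill) / this.refillMs);
        if (earned > 0) {
            this.tokens = Math.min(this.limit, this.tokens + earned);
            this.lastRefill += earned * this.refillMs;
        }
    }
}

// A request passes only if *every* level has quota; then each level pays one token.
function tryAcquire(buckets, now = Date.now()) {
    buckets.forEach(b => b.refill(now));
    if (buckets.every(b => b.tokens > 0)) {
        buckets.forEach(b => b.tokens--);
        return true;
    }
    return false;
}
```

Because all levels are checked together, the tightest bucket at any moment is the one that gates the request, which is what lets a short-interval burst limit coexist with longer-interval totals.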

Once the limiter has been declared, requests can be queued using the enqueue method. Crawlimiter will resolve the promise to true if quota is available for the request. Requests will wait in the queue if no quota is available.

const result = apiLimiter.enqueue();

result.then(ticket => {
    if (ticket) {
        // quota granted; do something
    }
}).catch(error => console.error(error));

If requests need to be stopped for some time, the pause or stop methods can be used. Pause stops quota from being given to requests but preserves the queue state. Stop also stops quota from being given, and additionally clears the queue.

apiLimiter.pause();
//OR
apiLimiter.stop();
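The difference between the two can be sketched with a tiny promise queue (assumed behavior for illustration, not Crawlimiter's source; here stop resolves pending tickets to false, one plausible way to clear a queue of promises):

```javascript
// Illustrative pause vs. stop semantics for a promise-based request queue.
class TinyQueue {
    constructor() {
        this.queue = [];     // resolver functions for pending requests
        this.paused = false;
    }
    enqueue() {
        // Each queued request is a pending promise until quota is granted.
        return new Promise(resolve => this.queue.push(resolve));
    }
    grantNext() {
        if (!this.paused && this.queue.length > 0) this.queue.shift()(true);
    }
    pause() {
        this.paused = true;  // queue state preserved; requests keep waiting
    }
    stop() {
        this.paused = true;  // queue cleared; pending requests get no quota
        while (this.queue.length > 0) this.queue.shift()(false);
    }
}
```

With pause, callers simply wait longer; with stop, every waiting caller is settled immediately, so code like the `if (ticket)` check above can tell the two outcomes apart.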
