You are the bagpiper.
In Node, it is convenient to use asynchrony and concurrency to speed up our business logic. But if the concurrency grows too large, the server may not be able to keep up, and we need to limit it. The HTTP module already ships http.Agent to control the number of sockets, but our asynchronous APIs are usually wrapped up in advance, and changing the agent inside each of them is unrealistic. So let's implement the limit in our own logic layer.
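For outgoing HTTP requests specifically, Node exposes that socket limit on the agent itself; for example:

var http = require('http');
// limit the number of concurrent sockets per host for client requests
http.globalAgent.maxSockets = 10;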
$ npm install bagpipe
The API exposed by Bagpipe consists only of the constructor and the instance method push.
Without any limit, we might fire off concurrent calls like this, producing 100 concurrent asynchronous invocations:
for (var i = 0; i < 100; i++) {
  async(function () {
    // asynchronous call
  });
}
If you need to limit the concurrency, what would your solution be? Here is Bagpipe's:
var Bagpipe = require('bagpipe');
// Set the maximum concurrency to 10
var bagpipe = new Bagpipe(10);
for (var i = 0; i < 100; i++) {
  bagpipe.push(async, function () {
    // execute the asynchronous callback
  });
}
Yes: the call simply separates the method, its parameters, and the callback, then hands them to Bagpipe through push.
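For instance, a plain fs.readFile call (the path here is just a placeholder) turns into a push where the method comes first, its arguments follow, and the callback stays last:

var fs = require('fs');
// before: fs.readFile('/path/to/file', 'utf-8', function (err, data) { ... });
bagpipe.push(fs.readFile, '/path/to/file', 'utf-8', function (err, data) {
  // handle the result as usual
});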
How does Bagpipe compare with the solution you anticipated? The constructor accepts the following options:
- refuse: when the queue is full, Bagpipe refuses the new asynchronous call and executes its callback with a TooMuchAsyncCallError exception. Defaults to false.
- timeout: a global timeout for asynchronous calls. If a call does not complete in time, its callback is executed with a BagpipeTimeoutError exception. Defaults to null.
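Presumably the two options can be combined in a single options object; a sketch with arbitrary values:

var Bagpipe = require('bagpipe');
var bagpipe = new Bagpipe(10, {
  refuse: true,  // refuse new calls when the queue is full
  timeout: 3000  // fail a call with BagpipeTimeoutError after 3 seconds
});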
push delivers each call into Bagpipe's internal queue. If the number of active calls is below the concurrency limit, the call is dequeued and executed immediately; otherwise it waits in the queue. Whenever an asynchronous call finishes, the call at the head of the queue is dequeued and executed, which guarantees that the number of active asynchronous calls never exceeds the limit.
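This is not Bagpipe's actual source, but a minimal sketch of the queueing idea just described:

function Limiter(limit) {
  this.limit = limit; // maximum number of concurrent calls
  this.active = 0;    // calls currently running
  this.queue = [];    // calls waiting for a free slot
}

// same calling convention as Bagpipe: push(method, arg1, ..., callback)
Limiter.prototype.push = function (method) {
  this.queue.push({ method: method, args: Array.prototype.slice.call(arguments, 1) });
  this.next();
};

Limiter.prototype.next = function () {
  if (this.active >= this.limit || this.queue.length === 0) {
    return;
  }
  var self = this;
  var task = this.queue.shift();
  var callback = task.args.pop();
  this.active++;
  // wrap the callback: when the call finishes, free the slot and run the next task
  task.args.push(function () {
    self.active--;
    self.next();
    callback.apply(null, arguments);
  });
  task.method.apply(null, task.args);
};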
When the queue length grows larger than 1, the Bagpipe object fires its full event, passing the current queue length. That value helps you assess how your business logic is performing. For example:
bagpipe.on('full', function (length) {
  console.warn(`The underlying system cannot keep up; the queue is jammed. Current queue length: ${length}`);
});
If the queue length exceeds the limit, the refuse option decides whether new calls keep queuing or are refused. refuse defaults to false; if it is set to true, a TooMuchAsyncCallError exception is passed directly to the callback:
var bagpipe = new Bagpipe(10, {
  refuse: true
});
If an asynchronous call unexpectedly never completes, the queue becomes unbalanced: the slot it occupies is never freed. Setting timeout guarantees the callback eventually runs, receiving a BagpipeTimeoutError exception:
var bagpipe = new Bagpipe(10, {
  timeout: 1000
});
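Both failures arrive as the first argument of the callback. Assuming the two error objects can be distinguished by their name property (an assumption worth verifying against your Bagpipe version), handling could look like this:

bagpipe.push(fs.readFile, '/path/to/file', 'utf-8', function (err, data) {
  if (err) {
    if (err.name === 'TooMuchAsyncCallError') {
      // the queue was full and refuse: true was set
    } else if (err.name === 'BagpipeTimeoutError') {
      // the call did not finish within the timeout
    }
    return;
  }
  // use data as usual
});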
- Ensure that the last parameter of the asynchronous method is the callback.
- Listen to the full event and add your own assessment of business performance.
- Contexts for asynchronous methods are not supported yet. Make sure the asynchronous method contains no this reference; if it does, use bind to pass in the correct context (see the sketch after this list).
- The asynchronous method should handle timeouts itself, ensuring that it returns within a certain time whether or not the business logic has finished.
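A sketch of the bind advice above; reader and its readAsync method are hypothetical names used only for illustration:

var fs = require('fs');

// hypothetical object whose method relies on `this`
var reader = {
  encoding: 'utf-8',
  readAsync: function (path, callback) {
    fs.readFile(path, this.encoding, callback);
  }
};

// bind the correct context before handing the method to Bagpipe
bagpipe.push(reader.readAsync.bind(reader), '/path/to/file', function (err, data) {
  // handle the result
});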
When traversing file directories, asynchrony lets you make full use of I/O: you can easily launch thousands of file reads at once. But system file descriptors are limited; ignore that, and you will be rereading this article when errors like the following occur:
Error: EMFILE, too many open files
Some may consider handling this with synchronous methods, but with synchronous calls the CPU and I/O cannot work concurrently, and under certain conditions performance is a non-negotiable metric. With Bagpipe you can keep the concurrency and limit it at the same time:
var fs = require('fs');
var Bagpipe = require('bagpipe');

var bagpipe = new Bagpipe(10);
var files = ['Here are many files'];
for (var i = 0; i < files.length; i++) {
  // previously: fs.readFile(files[i], 'utf-8', function (err, data) {
  bagpipe.push(fs.readFile, files[i], 'utf-8', function (err, data) {
    // no more errors from too many open file descriptors
    // well done
  });
}
Released under the MIT license. Welcome, and enjoy open source.