This repository has been archived by the owner on Apr 22, 2023. It is now read-only.
We are trying to serve as many requests as possible, writing one file per request. With small payloads it works fine, but in high-payload scenarios, while the synchronous version keeps working, the asynchronous version (fs.writeFile) hangs: Node stops serving requests and leaves most of the files at 0 bytes (many of them never even get created).
There is no error in v0.6.6. In v0.4.7 we get:
{ stack: [Getter/Setter], arguments: undefined, type: undefined, message: 'EMFILE, Too many open files \'aw_1031\'', errno: 24, code: 'EMFILE', path: 'aw_1031' }
Same behaviour on Ubuntu (VM) and Mac OS X.
This is the example script we are currently running, load-tested with:
ab -n 30000 -c 500 http://HOST:8000/
var http = require('http');
var fs = require('fs');
var util = require('util');

var i = 0;

function writeALot(req, res) {
  fs.writeFile("filetest" + i, "Just a try: " + i, function (err) {
    if (err) console.log(util.inspect(err));
  });
  i++;
  res.writeHead(200);
  res.end();
}

http.createServer(writeALot).listen(8000);
How can we manage the maximum number of concurrent file descriptors? Any advice?
Thanks in advance.
Confirmed. It's a libuv bug and we're possibly also exhausting the libeio task queue.
You can avoid the worst of it by rewriting the request logic:
function writeALot(req, res) {
  res.writeHead(200);
  fs.writeFile("filetest" + i, "Just a try: " + i, function (err) {
    res.end();
    if (err) console.log(util.inspect(err));
  });
  i++;
}
The problem with that server logic is that it should work this way: answering before processing (when possible), as a kind of one-way relay. If that is not possible, we will slow things down a bit.
Is this an already-reported bug? Is there a ticket to follow?