net.js - possible EventEmitter memory leak detected #5108
Comments
How do you know that? More details would be appreciated.
Unfortunately, I don't have any test cases prepared. It happens in this project: https://github.com/archilimnadze/reverse-proxy-server. It's basically a one-file reverse proxy server receiving requests from the …
Try instrumenting the .on() and .once() methods of req, res, req.socket and res.socket and trace the 'end' events. What I suspect happens is that 'end' event listeners keep getting added, either by node or by your code. The internal listener that node adds is probably the tipping point. Example:

var on_ = req.on;
req.on = function() {
  console.log(arguments);
  console.trace();
  return on_.apply(this, arguments);
};
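A more generic form of that instrumentation, as a sketch (the traceListeners helper here is made up for illustration), wraps both .on() and .once() on any suspect emitter:

function traceListeners(emitter, name) {
  ['on', 'once'].forEach(function(method) {
    var original = emitter[method];
    emitter[method] = function(event) {
      // Log which event gains a listener, and where it was added from
      console.log(name + '.' + method + "('" + event + "')");
      console.trace();
      return original.apply(this, arguments);
    };
  });
}

// For example, inside a request handler:
// traceListeners(req.socket, 'req.socket');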
@archilimnadze A standalone node-only test is best. Failing that, a standalone not-node-only test is second-best. It's not as helpful to just say it happens when you use XYZ module, without pointing to a set of steps to actually make it happen with that module.
I'll try to make a test case. Until then, I know you can't do anything. Sorry.
I'm closing it until I find it reproducible.
Last week I tried to update a project built on Express to node 0.10.1, but ran into this bug as well. If a page is reloaded twice quickly, or several requests are made by clicking a button, the memory leak warning is triggered and node.js runs at 100% CPU. This is on CentOS 6.4. I will try to gather more information and trace the error. Interestingly enough, I couldn't reproduce the error on Windows running node.js with the same project.
I am having a similar problem on node 0.10.0 (I'm using connect and socket.io). I don't get an out of memory exception, but I do experience 100% CPU usage. I don't have a test case, but my server application will reliably begin eating 100% CPU at some point, usually within 48 hours of launching it. I used the … I am also using CentOS 6. Relevant portion of the log:
EDIT: Searching around, it appears a few other people are having the same nondeterministic 100% CPU usage issue. If there's any more information I can provide, or if my issue is distinct and should be opened anew, please let me know.
In order to make any progress on this, I need to reproduce it. Obviously a standalone node-only test is best, but even a sporadically failing test case that uses Express would be better than nothing. If you are seeing this on SmartOS, then a core dump would also be useful.
Have you tried on 0.10.3? There was an actual bug that got fixed.
Experiencing the same issue with v0.10.3 on CentOS 6.0. It does not occur on 0.9.4. Here's the resource usage from top:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND

The process has not crashed, but the server takes minutes to respond. Same behavior as calzoneman's description above.
@razvan346 interesting, I'll try downgrading to 0.9.4 and see if that eliminates the problem. Which modules are you using and what kind of application is it? Perhaps if we can determine similarities we can narrow down a test case.
Nope, still getting the CPU usage issue on 0.9.4.
I have isolated this issue locally to serving static files via connect.static('public'). More specifically, this was only reproducible when serving index.html. Turn off JavaScript to prevent other requests, then hold Ctrl-R or Command-R to refresh rapidly until CPU goes to 99%. Also, @calzoneman, you are right: it occurs in 0.9.4 as well.
Reproducible in the same fashion described above using the buffet static serving module (https://github.com/carlos8f/node-buffet). This appears to be a node/CentOS issue rather than a module issue.
If you have a server that's serving stuff, and you're hitting it as fast as possible, it's expected that it'll use all the available CPU to do its work. It's doing work. That's what CPUs are for. Are you getting it to print "Possible eventemitter leak detected", or not? Just because 2 modules perform badly, that doesn't mean that Node has a bug. It could well be that both modules are just broken, perhaps in similar ways. Using https://github.com/isaacs/st, I can't get this to happen.
To clarify: once done hitting the server as fast as possible, the CPU usage will REMAIN at 100% until the node process is restarted.
I just re-tested with the buffet module: the CPU remains at 100%, however it does not print EventEmitter leak errors. Also, the server remains responsive, whereas with express static it would not.
As far as I know, the issue on my end is not triggered by someone mashing refresh. It's possible that @razvan346's test is simply triggering the problem much faster because of the high rate of requests, and it's also possible that the issues are distinct. I have confirmed that my issue (100% CPU usage indefinitely after being triggered by an undetermined stimulus) occurs in node 0.9.4 and 0.10.x, but does not occur in 0.8.23. I'm using express to serve static files and socket.io for websocket communication, and using the same version of these modules across node versions (previously I was using connect for file serving, same results across versions). I will try to test my software with isaacs/st for completeness.
On CentOS 6.4 I have the same problem.

app.configure(function() {
  app.use(function(req, res, next) {
    res.socket.on('_socketEnd', function() {
      var s = this;
      console.log('res.socket.on _socketEnd'); // See the problem
      if (s) s.destroy(); // Stop the loop
    });
    next();
  });
});

On my LAN it was hard to trigger the loop, because the download finished too quickly. Hope this helps solving the real problem.
Interesting, I had a suspicion that it was being caused by a dirty disconnect. I've been using node 0.8.23 as well, and it's working fine for now, but obviously I'd much rather have the option of using an up-to-date version. It's a bit late where I am, but tomorrow I'll try testing this on other distros to see if it's specific to CentOS or if it's a node/module problem. I don't think this is the same issue as the one in the original post of this thread; perhaps one of us should open a new Issue for this?
I have confirmed the issue on both CentOS 6 and Debian 6, both using node 0.10.4. Here is a full test case based on @RonaldPannekoek's code that demonstrates the issue:

var fs = require('fs');
var app = require('express')();
app.configure(function() {
app.use(function(req, res, next) {
res.socket.on('_socketEnd', function() {
var s = this;
console.log('res.socket.on _socketEnd');
// if(s) s.destroy(); // Uncomment to try his workaround
});
next();
});
});
app.get('/', function(req, res) {
// Or some other huge file
fs.readFile('100mb.test', function(err, data) {
res.send(data+"");
});
});
app.listen(8080);

Testing: Navigate to http://servername:8080 and close the tab before the file has fully loaded.
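One plausible way listeners could pile up with a middleware like this (an assumption, not a confirmed diagnosis): with HTTP keep-alive, res.socket is reused across requests, so calling .on() once per request adds a fresh listener each time. A sketch of a self-cleaning variant:

app.use(function(req, res, next) {
  function onSocketEnd() {
    console.log('res.socket _socketEnd');
    this.destroy();
  }
  // once() removes the listener automatically after it fires
  res.socket.once('_socketEnd', onSocketEnd);
  // Also remove it if the response finishes normally, so repeated
  // requests over a kept-alive socket don't accumulate handlers.
  res.on('finish', function() {
    res.socket.removeListener('_socketEnd', onSocketEnd);
  });
  next();
});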
This seems to be related to #5298.
After putting net.js into debug mode, I can see this being written indefinitely, even when all connections look to be closed: https://gist.github.com/AnaelFavre/5274181 It seems that net.js tries to read from a badly closed connection which is now destroyed?
That …
I understand. Sorry for the useless intervention!
I tested with this script, which uses node only:

var fs = require('fs');
var http = require('http');
var b = new Buffer(100 * 1024 * 1024);
b.fill('x');
fs.writeFileSync('100mb.test', b);
b = null;
var server = http.createServer(function(req, res) {
res.socket.on('_socketEnd', function() {
var s = this;
console.log('res.socket.on _socketEnd');
// if(s) s.destroy(); // Uncomment to try his workaround
});
fs.readFile('100mb.test', function(err, data) {
res.end(data + '');
});
});
server.listen(8080);

Of course, it's dog slow, because it's not optimizing anything, it's converting from a buffer to a string unnecessarily, and it probably ought to be streaming to use less memory. But it finishes just fine, doesn't go into any kind of infinite loop, and seems to behave as expected.
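For comparison, a minimal streaming sketch of the same server, which pipes the file instead of buffering it all in memory:

var fs = require('fs');
var http = require('http');

var server = http.createServer(function(req, res) {
  var stream = fs.createReadStream('100mb.test');
  // pipe() handles backpressure and ends the response when the file is done
  stream.pipe(res);
  stream.on('error', function() {
    res.statusCode = 500;
    res.end();
  });
});

server.listen(8080);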
Getting the error with PubNub, especially under poor network conditions. At first I suspected it was due to the event loop being congested (I was programming a microcontroller, so blocking the event loop is easier than on a beefier web server), but no matter how slowly I open the sockets, or how much I yield with setTimeout/setImmediate etc., I still get the error. This is node v0.10.32 on a Tessel TM-00-04 (firmware updated on Feb 9th 2015). I don't have diagnostic info to share right now; I will post when I have some.
Still happening in 0.10.36 :S
Still getting this on a mysql pool connection with 0.10.28 (a different server than above), on CentOS, at:

connection.on('error', function (err) {
I'm getting this error using Grunt with Node v0.12.2.
I think this should be closed. It's expected to get this warning when adding more than 10 event listeners to an emitter. You can get all the listeners from the EventEmitter and investigate what listeners were added, but unless there is evidence that node itself is leaking listeners, this is expected behaviour. Gather that evidence and reopen if it can be found.
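A minimal sketch of that debugging approach: list the emitter's listeners to see what keeps being added, and raise the limit only if the count turns out to be legitimate:

var EventEmitter = require('events').EventEmitter;

var emitter = new EventEmitter();
for (var i = 0; i < 11; i++) {
  // Adding the 11th listener triggers the warning (the default limit is 10)
  emitter.on('end', function() {});
}

// Investigate what was actually added:
console.log(emitter.listeners('end').length); // 11

// If the listeners turn out to be legitimate, raise the limit:
emitter.setMaxListeners(20);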
I am still getting this error with Node 4.2.1 using JSDom 7.1.1 when trying to scrape a page every second:

(node) warning: possible EventEmitter memory leak detected. 11 error listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
at Socket.addListener (events.js:239:17)
at Socket.Readable.on (_stream_readable.js:665:33)
at Object.done (/Users/ara/dev/iteam/data-mining/streamers/scrape/inloggedcars.js:24:22)
at /Users/ara/dev/iteam/data-mining/streamers/node_modules/jsdom/lib/jsdom.js:271:18
at doNTCallback0 (node.js:417:9)
at process._tickCallback (node.js:346:13)
This is an intermediate fix for an issue of accessing `TLSWrap` fields after the parent handle was destroyed. While the `close` listener cleans up this field automatically, it can be done even earlier, at the `TLSWrap.close` call. A proper fix is going to be submitted and landed after this one.

Fix: nodejs#5108
PR-URL: nodejs/node#5168
Reviewed-By: Shigeki Ohtsu <ohtsu@iij.ad.jp>
I got the same issue on this code with strongloop (loopback).
+1
This has probably been mentioned multiple times in this thread, but because nobody bothers to read it, here's a comment from literally 5 comments above mine:
Having the same issue with request (which uses the http core module).
I agree with @calzoneman; the fix in my case was that I had to call:
I don't know exactly when this happens, but I know it is not caused by my code. This happens in Node v0.10.0. Line numbers have changed a little bit since then, but the stack trace might be useful: