net.js - possible EventEmitter memory leak detected #5108

Closed
aimnadze opened this Issue Mar 21, 2013 · 107 comments

I don't know exactly when this happens. But I know this is not caused by my code. This happens in Node v0.10.0. Line numbers have changed a little bit since then but the stack trace might be useful:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at Socket.EventEmitter.addListener (events.js:160:15)
    at Socket.Readable.on (_stream_readable.js:653:33)
    at Socket.EventEmitter.once (events.js:179:8)
    at TCP.onread (net.js:512:26)
FATAL ERROR: Evacuation Allocation failed - process out of memory
Owner

bnoordhuis commented Mar 21, 2013

But I know this is not caused by my code.

How do you know that? More details would be appreciated.

Unfortunately, I don't have any test cases prepared. It happens in this project https://github.com/archilimnadze/reverse-proxy-server . It's basically a one-file reverse proxy server receiving requests from the cluster module (using 4 cores in my case) and making proxy requests. Tell me what details I can provide and I'll do my best to get them.

Owner

bnoordhuis commented Mar 21, 2013

Try instrumenting the .on() and .once() methods of req, res, req.socket and res.socket and trace the 'end' events. What I suspect happens is that 'end' event listeners keep getting added, either by node or by your code. The internal listener that node adds is probably the tipping point.

Example:

var on_ = req.on;
req.on = function() {
  console.log(arguments);
  console.trace();
  return on_.apply(this, arguments);
};
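
A slightly fuller sketch of the same idea, as one way it might be applied (the helper name and the commented-out usage are illustrative assumptions, not part of the suggestion above):

function traceEndListeners(name, emitter) {
  ['on', 'once'].forEach(function(method) {
    var original = emitter[method];
    emitter[method] = function(event) {
      if (event === 'end') {
        // Log who is registering 'end' listeners and how many are already attached.
        console.log('%s.%s(\'end\'), %d listener(s) so far',
                    name, method, this.listeners('end').length);
        console.trace();
      }
      return original.apply(this, arguments);
    };
  });
}

// Inside the request handler (hypothetical usage):
// traceEndListeners('req', req);
// traceEndListeners('res', res);
// traceEndListeners('req.socket', req.socket);
// traceEndListeners('res.socket', res.socket);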

isaacs commented Mar 22, 2013

@archilimnadze A standalone node-only test is best. Failing that, a standalone not-node-only test is second-best. It's not as helpful to just say it happens when you use XYZ module, without pointing to a set of steps to actually make it happen with that module.

I'll try to make a test case. I know you can't do anything before that. Sorry.

I'm closing it until I find it reproducible.

@aimnadze aimnadze closed this Mar 22, 2013

Last week I tried to update a project built on Express to node 0.10.1. However, I ran into this bug as well. If the page is reloaded twice quickly or several requests are made by clicking a button, the memory leak warning is triggered and node.js runs at 100% CPU.

Centos 6.4

I will try to gather more information and trace the error.

Interestingly enough, I couldn't reproduce the error on Windows running node.js with the same project.

I am having a similar problem on node 0.10.0 (I'm using connect and socket.io). I don't get an out-of-memory exception, but I do experience 100% CPU usage. I don't have a test case, but my server application will reliably begin eating 100% CPU at some point, usually within 48 hours of launching it. I used the tick module to profile my application and the heaviest function was *onread net.js:464 (the line number has since changed, but looking at the file history this particular function was not changed). My suspicion is that a client socket is ending up in a bad state and causing an infinite loop.

I am also using CentOS 6.

Relevant portion of log:

 [Bottom up (heavy) profile]:
  Note: percentage shows a share of a particular caller in the total
  amount of its parent calls.
  Callers occupying less than 2.0% are not shown.

   ticks parent  name
  6861788   82.1%  /lib64/libc-2.12.so

  669966    8.0%  /usr/local/bin/node
  56824    8.5%    LazyCompile: *EventEmitter.emit events.js:53
  56817  100.0%      LazyCompile: *onread net.js:464
  15685    2.3%    LazyCompile: ADD native runtime.js:163
  15682  100.0%      LazyCompile: *onread net.js:464

EDIT: Searching around, it appears a few other people are having the same nondeterministic 100% CPU usage issue. If there's any more information I can provide, or if my issue is distinct and should be opened anew, please let me know.

isaacs commented Apr 11, 2013

The fact that onread in net.js was a large portion of your time is actually pretty normal.

In order to make any progress on this, I need to reproduce it. Obviously a standalone node-only test is best, but even a sporadically failing test case that uses Express would be better than nothing. If you are seeing this on smartos, then a core dump would also be useful.

If page is reloaded twice quickly or several requests are made by clicking a button, the memory leak warning is triggered and node.js runs 100% cpu.

Have you tried on 0.10.3? There was an actual bug that got fixed.

suprraz commented Apr 12, 2013

Experiencing the same issue with v0.10.3 on CentOS 6.0. It does not occur on 0.9.4. Here's the resource usage from top:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
11066 root 20 0 831m 768m 6228 R 97.1 44.9 72:19.81 node

Process has not crashed, but server takes minutes to respond. Same behavior as calzoneman's description above.

@razvan346 interesting, I'll try downgrading to 0.9.4 and seeing if that eliminates the problem. Which modules are you using and what kind of application is it? Perhaps if we can determine similarities we can narrow down a test case.

Nope, still getting the CPU usage issue on 0.9.4.

suprraz commented Apr 17, 2013

I have isolated this issue locally to serving static files via connect.static('public'). More specifically, this was only reproducible when serving index.html. Turn off JavaScript to prevent other requests and hold Ctrl-R or Command-R to refresh rapidly until CPU goes to 99%.
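
For reference, a minimal sketch of the setup described above (an assumption on my part: connect 2.x, where connect.static() is built in, with an index.html under ./public):

// Hedged reproduction sketch, not the reporter's actual code.
var connect = require('connect');

var app = connect();
app.use(connect.static('public')); // serves ./public/index.html at /
app.listen(8080);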

Also, @calzoneman you are right. It occurs in 0.9.4 as well.

suprraz commented Apr 17, 2013

Reproducible in the same fashion described above using the buffet static serving module (https://github.com/carlos8f/node-buffet). This appears to be a node/centos issue rather than a module issue.

isaacs commented Apr 18, 2013

If you have a server that's serving stuff, and you're hitting it as fast as possible, it's expected that it'll use all the available CPU to do its work. It's doing work. That's what CPUs are for.

Are you getting it to print "Possible eventemitter leak detected", or not?

Just because 2 modules perform badly, that doesn't mean that Node has a bug. It could well be that both modules are just broken, perhaps in similar ways.

Using https://github.com/isaacs/st, I can't get this to happen.

suprraz commented Apr 18, 2013

To clarify, once done hitting the server as fast as possible, the CPU usage will REMAIN at 100% until the node process is restarted.

suprraz commented Apr 18, 2013

I just re-tested with the buffet module: the CPU remains at 100%, however it does not print the EventEmitter leak warnings. Also, the server remains responsive, whereas with express static it would not.
EDIT: Memory footprint does keep growing (observed at 15x the original size), indicating a leak.

As far as I know, the issue on my end is not triggered by someone mashing refresh. It's possible that @razvan346's test is simply triggering the problem much faster because of the high rate of requests, and it's also possible that the issues are distinct.

I have confirmed that my issue (100% CPU usage indefinitely after being triggered by an undetermined stimulus) occurs in node 0.9.4 and 0.10.x, but does not occur in 0.8.23. I'm using express to serve static files and socket.io for websocket communication, and using the same version of these modules across node versions (previously I was using connect for file serving, same results across versions).

I will try to test my software with isaacs/st for completeness.

On CentOS 6.4 I have the same problem.
To create the problem you have to cancel a download from a client browser (open a page with a big video and close the tab before it finishes downloading).
Then the socket's '_socketEnd' event starts looping.
This is how I stop it on my test machine in Express:

app.configure(function () {
  app.use(function (req, res, next) {
    res.socket.on('_socketEnd', function () {
      var s = this;
      console.log('res.socket.on _socketEnd'); // See the problem
      if (s) s.destroy(); // Stop the loop
    });
    next();
  });
});

On my LAN it was hard to trigger the loop, because the download finished too quickly.
Online, on a production machine, it was easier to trigger the loop with a video.
On my production machines I went back to node 0.8.23, because I don't know if my solution has side effects.

Hope this helps solving the real problem.

Interesting, I had a suspicion that it was being caused by a dirty disconnect. I've been using node 0.8.23 as well, and it's working fine for now but obviously I'd much rather have the option of using an up-to-date version.

It's a bit late where I am, but tomorrow I'll try testing this on other distros to see if it's specific to CentOS or if it's a node/module problem.

I don't think this is the same issue as the one in the original post of this thread; perhaps one of us should open a new Issue for this?

I have confirmed the issue on both CentOS 6 and Debian 6, both using node 0.10.4.

Here is a full test case based on @RonaldPannekoek's given code that demonstrates the issue:

var fs = require('fs');
var app = require('express')();
app.configure(function() {
  app.use(function(req, res, next) {
    res.socket.on('_socketEnd', function() {
      var s = this;
      console.log('res.socket.on _socketEnd');
      // if(s) s.destroy(); // Uncomment to try his workaround
    });
    next();
  });
});

app.get('/', function(req, res) {
  // Or some other huge file
  fs.readFile('100mb.test', function(err, data) {
    res.send(data+"");
  });
});

app.listen(8080);

Testing: Navigate to http://servername:8080, and close the tab before the file is fully loaded
Expected behaviour: 'res.socket.on _socketEnd' is printed once, and the socket is destroyed
Actual behaviour: 'res.socket.on _socketEnd' is printed repeatedly and CPU usage is locked at 100%

mtyaka commented Apr 23, 2013

This seems to be related to #5298.

Ifnot commented Apr 23, 2013

With net.js debug output enabled, I can see this being written indefinitely, even when all connections appear to be closed: https://gist.github.com/AnaelFavre/5274181

It seems that net.js tries to read a badly closed connection which is now destroyed?
NET: 1485 onread EOF "undefined undefined NaN"

isaacs commented Apr 23, 2013

That onread EOF "undefined undefined NaN" is normal. It's just telling you that it got the EOF signal, and that the chunk, offset, and length are all unset (which is normal in the EOF case).

Ifnot commented Apr 23, 2013

I understand. Sorry for the useless intervention!

isaacs commented Apr 23, 2013

I tested with this script which uses node only:

var fs = require('fs');
var http = require('http');

var b = new Buffer(100 * 1024 * 1024);
b.fill('x');
fs.writeFileSync('100mb.test', b);
b = null;

var server = http.createServer(function(req, res) {
  res.socket.on('_socketEnd', function() {

    var s = this;
    console.log('res.socket.on _socketEnd');

    // if(s) s.destroy(); // Uncomment to try his workaround
  });

  fs.readFile('100mb.test', function(err, data) {
    res.end(data + '');
  });
});

server.listen(8080);

Of course, it's dog slow, because it's not optimizing anything, and it's converting from a buffer to a string unnecessarily, and probably ought to be streaming to use less memory. But it finishes just fine, doesn't go into any kind of infinite loop, and seems to behave as expected.

@isaacs I just tested that exact code on Debian 6, node 0.10.5 (literally downloaded the 10.5 source, compiled, and installed it 10 minutes ago). Infinite loop. If there's other relevant information I can provide, I'll be happy to provide it, because it seems that even with a clean node install and no modules, we're getting different results.

EDIT: To be clear, the issue is triggered by closing the tab, refreshing, or similar actions which interrupt the connection while the data is being sent. Allowing the download to complete does not trigger the issue.

EDIT: The result of my testing was that stdout was spammed with res.socket.on _socketEnd and stderr displayed the "possible EventEmitter memory leak detected" error described by the original poster.

isaacs commented Apr 23, 2013

Oh, ok, I was able to get the warning about event handler leaking, because it's repeatedly adding a _socketEnd handler to the same socket over and over again, if I have wrk hit the server in such a way as to reuse sockets more than 10 times. However, if I don't add the handler to that event, then it doesn't print that warning. If I do, then I just get exactly the same number of _socketEnd lines printed as I made requests.

What makes you think that there's an "infinite loop" happening? What are your actual observations?

Tried removing the handler like you said. I still get the memory leak warning, and my observation is that the CPU usage of the node process rockets to 100% and remains there, and the memory usage grew steadily to 96% before I killed the process.

For clarity, the CPU and memory usage leaks were triggered by interrupting the connection before the response finished.

Exact code I'm using:

var fs = require('fs');
var http = require('http');

var b = new Buffer(100 * 1024 * 1024);
b.fill('x');
fs.writeFileSync('100mb.test', b);
b = null;

var server = http.createServer(function(req, res) {
  fs.readFile('100mb.test', function(err, data) {
    res.end(data + '');
  });
});

server.listen(8080);

When I was using the handler, I was getting a seemingly infinite number of "res.socket.on _socketEnd" messages (my terminal became unusably slow within seconds because of the rate of console output)

isaacs commented Apr 23, 2013

How are you making requests to the server?

I am navigating to http://servername:8080 in Firefox, waiting until I can see the response (a bunch of 'x's), and closing the tab.

EDIT: Same behavior when requesting from Chromium

cyzon@sb:~/tmp$ node -v
v0.10.5
cyzon@sb:~/tmp$ node eatcpu.js 1&>out 2&>err
^Ccyzon@sb:~/tmp$ tail err
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at Socket.EventEmitter.addListener (events.js:160:15)
    at Socket.Readable.on (_stream_readable.js:663:33)
    at Socket.EventEmitter.once (events.js:179:8)
    at TCP.onread (net.js:527:26)

isaacs commented Apr 23, 2013

Are you waiting until the response is finished before closing the tab?

isaacs commented Apr 23, 2013

Do you see the same behavior if you hit the url with curl?

I am not waiting until the response is finished. The behavior is demonstrated when the response is interrupted. I will try curl and edit the results in.

EDIT: Same result with curl. I killed curl after the first megabyte of data and the server is now at 100% CPU and nearly 100% memory.

I tried letting the server run to see if it would recover, but it leaks until the process is killed by the system for using too much memory.

Issue definitely occurs on Debian 6 and CentOS 6 with node 0.10.5. I was not able to reproduce it on my local Arch installation. I don't suppose OpenVZ could have anything to do with the problem?

The issue does not occur on any of my systems when using node 0.8.23.

Same issue on CentOS 6.2 x86_64 with node 0.10.4 and Express. The problem cannot be reproduced with node 0.10.4 on OSX.

It happens when holding down the refresh keys in Chrome to force the page to reload very quickly. After I stop refreshing the page, the memory used keeps growing and CPU usage stays above 90%.

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at Socket.EventEmitter.addListener (events.js:160:15)
    at Socket.Readable.on (_stream_readable.js:653:33)
    at Socket.EventEmitter.once (events.js:179:8)
    at TCP.onread (net.js:527:26)

Ifnot commented Apr 24, 2013

I confirm that the problem is not present in node 0.8.23: my app works well with this version.

@calzoneman : My Debian 6 is running inside an OpenVZ container too.

Has anyone triggered the problem outside of an OpenVZ container?

@anaelfavre @calzoneman The machine on which I'm facing this problem is also in an OpenVZ container.

Ifnot commented Apr 24, 2013

@mrhooray Interesting. Unfortunately, I could not get a Linux-based server without container isolation on the internet for testing.

Owner

bnoordhuis commented Apr 24, 2013

@calzoneman In your example it's expected that memory and CPU usage are spiking. You're using fs.readFile() which tells node to read the entire file as fast as it can, regardless of network back-pressure, aborted connections, etc.

var server = http.createServer(function(req, res) {
  fs.createReadStream('/dev/zero').pipe(res);
});
server.listen(8080);

Your example rewritten to use a ReadStream runs in constant memory, even with files of infinite length, and cleans up after itself when the connection is aborted.

The above is a long-winded way of saying that I'm not convinced so far that there is a bug in node.js core.

Ifnot commented Apr 24, 2013

@bnoordhuis I do not agree with you. Indeed, reading the file should spike CPU usage. But in my case, I have the same problem when streaming HTML pages like an HTTP proxy. The CPU stays at 100% and memory keeps growing even if I stop all requests.

EDIT: I tested your example, and while I let my browser download the data, the CPU does not go above 5%. But when I leave (stopping the running download), node.js throws the warning and starts to fill all the memory at 100% CPU usage.

EDIT: I also tried your example with 0.8.23 and it works fine: when the client disconnects, the connection is properly closed and node returns to 0% CPU.

mtyaka commented Apr 24, 2013

I am running node on CentOS 6.3 straight on hardware (no OpenVZ) and also see this bug.

Owner

bnoordhuis commented Apr 24, 2013

@anaelfavre You can disagree but that doesn't change the fact that I'm right. I'm only half tongue in cheek here, the test case above is simply not how you should serve files. The fact that it worked in v0.8 is probably by accident; fs.readFile() and friends became significantly faster in v0.10.

If you have a test case that uses proper streams and exhibits the behavior you mentioned, please post it.

Ifnot commented Apr 24, 2013

@bnoordhuis I agree with you about file serving. The fact is, I tried your example and it throws exactly the same error. It shows that even if the file-serving method is questionable, there is surely a bug in node.

mtyaka commented Apr 24, 2013

@bnoordhuis I can reproduce it on CentOS/Node v0.10.5, using pipe with a 1mb file:

var fs = require('fs');
var http = require('http');

var b = new Buffer(1 * 1024 * 1024);
b.fill('x');
fs.writeFileSync('1mb.test', b);

var server = http.createServer(function(req, res) {
  fs.createReadStream('1mb.test').pipe(res);
});

server.listen(8080);

Opening http://servername:8080/ in a browser and hitting refresh before the first request finishes loading triggers the bug. Memory usage goes up to 1.5G and continues rising while CPU usage remains at 100% until I kill the node process.

I am also able to trigger the bug by running the below script from another machine:

var http = require('http');

for (var i=1; i<10; i++) {
  http.get('http://servername:8080').on('error', function(){});
}

setTimeout(process.exit, 100);

Running this script on the same machine that's serving the 1mb file doesn't seem to trigger the bug.

Ifnot commented Apr 24, 2013

@mtyaka "Running this script on the same machine that's serving the 1mb file doesn't seem to trigger the bug."

I suppose that the file is already sent before process.exit because of high localhost speed. Try to reduce timeout duration for local.

mtyaka commented Apr 24, 2013

Try to reduce timeout duration for local.

I tried various combinations of timeout duration and number of requests. I also tried to increase the 1mb file size to 100mb, but was never able to trigger the bug from the local machine.

Owner

bnoordhuis commented Apr 24, 2013

@mtyaka Thanks for the test but I can't reproduce it, neither on localhost nor over a LAN. That's with v0.10.5 running on 64-bit Linux 3.9-rc8. The server uses modest CPU time while the clients are connected and goes idle again when they disconnect.

What do you see with strace -fp $(pgrep node) when the server starts busy-looping?

mtyaka commented Apr 24, 2013

@bnoordhuis This is the strace output: https://gist.github.com/mtyaka/5452478/raw/d2b15476c637417233e877414c1550ad7ac1abaa/strace.out

The server is running CentOS 6.3, kernel version 2.6.32-279.14.1.el6.x86_64

Owner

bnoordhuis commented Apr 24, 2013

Hm, looks like a genuine bug. The socket has EOF'd but somehow node.js or libuv isn't closing the handle. It's strange that I can't reproduce it, but thanks for posting the log; I'll look into it.

Owner

bnoordhuis commented Apr 24, 2013

@mtyaka Sanity check. Can you apply the patch below, recompile and retest? I'm fairly certain by now that the bug is in node.js rather than libuv but it would be nice to have that confirmed.

diff --git a/deps/uv/src/unix/stream.c b/deps/uv/src/unix/stream.c
index bc9d4f1..3cbd846 100644
--- a/deps/uv/src/unix/stream.c
+++ b/deps/uv/src/unix/stream.c
@@ -1008,6 +1008,7 @@ static void uv__read(uv_stream_t* stream) {
         uv__handle_stop(stream);
       uv__set_artificial_error(stream->loop, UV_EOF);
       INVOKE_READ_CB(stream, -1, buf, UV_UNKNOWN_HANDLE);
+      assert(stream->flags & (UV_CLOSING | UV_CLOSED));
       return;
     } else {
       /* Successful read */

@isaacs isaacs reopened this Apr 25, 2013

isaacs commented Apr 25, 2013

Reopening, since there is a genuine bug here (or, somewhere, at least.)

mtyaka commented Apr 25, 2013

@bnoordhuis I retested with your patch; the assertion fails when I trigger the bug:

node: ../deps/uv/src/unix/stream.c:1011: uv__read: Assertion `stream->flags & (UV_CLOSING | UV_CLOSED)' failed.
Aborted (core dumped)
Owner

bnoordhuis commented Apr 29, 2013

@mtyaka Thanks for testing. I'm reasonably sure by now it's an artifact of how node.js (mis)implements half-open connections. What happens when you set allowHalfOpen to false? That is:

var server = http.createServer(...);
server.allowHalfOpen = false;
server.listen(...);

mtyaka commented Apr 30, 2013

@bnoordhuis Thanks for looking into this.

With allowHalfOpen set to false, I can still trigger the bug. The difference is that instead of only printing a single EventEmitter memory leak warning, it now prints two:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at Socket.EventEmitter.addListener (events.js:160:15)
    at Socket.Readable.on (_stream_readable.js:663:33)
    at Socket.EventEmitter.once (events.js:179:8)
    at Socket.destroySoon (net.js:413:10)
    at Socket.onSocketEnd (net.js:261:10)
    at Socket.EventEmitter.emit (events.js:92:17)
    at TCP.onread (net.js:535:10)
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at Socket.EventEmitter.addListener (events.js:160:15)
    at Socket.Readable.on (_stream_readable.js:663:33)
    at Socket.EventEmitter.once (events.js:179:8)
    at TCP.onread (net.js:527:26)

Here is the strace output: https://gist.github.com/mtyaka/5487243/raw/4b5c0b0383f2c7805e1c15c8762ecf3ef9daf6f5/allowHalfOpen-false-strace.out

If I compile node with the assert(stream->flags & (UV_CLOSING | UV_CLOSED)); patch from above, the assertion still fails when the bug is triggered.

zumoshi commented May 11, 2013

I also got the error on CentOS 6.4 with a TCP proxy (using only net, without express or http) and also using a simple express app.
It worked normally with v0.10.5 for a few minutes, then (after closing some connections before completion) I got the EventEmitter warning, and it maxed out RAM and CPU and remained there after the connections were closed, until I killed the process.
Downgrading to v0.8.9 fixed the issue.

Tarang commented May 19, 2013

I also get this on CentOS 6.4 and have tested with node 0.10.3-0.10.7. It occurs using node-http-proxy when browser clients attempt to connect via websockets. It does not occur with previous versions of Node.js on other platforms (e.g. Ubuntu).

suprraz commented May 19, 2013

Not sure how this error is not top priority. Node.js is unusable on CentOS. I have been forced to migrate to Ubuntu due to this alone.

Owner

bnoordhuis commented May 19, 2013

Continues in #5504. The issue that @mtyaka reported looks like a genuine bug. The other reports are either invalid or involve express, socket.io and/or http-proxy which means it's out of scope for node.js core.

@bnoordhuis bnoordhuis closed this May 19, 2013

Just want to quickly contribute our experience which has similarities to this bug report.

We were experiencing random 100% CPU spikes, and CPU usage getting stuck at 100% like many contributors on this page.

Some profiling and further googling brought me to this thread. We use CentOS 6.x like some others here, with node v0.10.7.

Reading this, I decided to upgrade and try v0.11.2. The same problem continued.

Finally I downgraded to 0.8.23 as suggested above and the problem is gone.

co-sche commented May 22, 2013

I reproduced it. (Node v0.10.7 on CentOS 6.3)

server.js

var net = require('net');
var content = new Buffer(1 * 1024 * 1024);
content.fill('#');
net.createServer(function(socket) {
    socket.write(content);
}).listen(3000);

client.js

var net = require('net');
var client = net.connect({
    host: 'localhost',
    port: 3000
}, function() {
    client.destroy();
});

co-sche commented May 22, 2013

It seems to happen when the server receives a FIN or RST unexpectedly, before the ACK from the client, while sending data.
If you cannot reproduce this, please try increasing the content size.

Btw, should I discuss this on #5504 instead?

@co-sche I think the intention of closing this issue was to continue it in #5504.

co-sche commented May 22, 2013

@calzoneman Thanks. I'll reprint comments to #5504.

goferito commented Jun 6, 2013

I get the same error on Ubuntu 12.04, Node 0.10.10, but just trying to gzip files:

var fs = require('fs');
var zlib = require('zlib');
var gzip = zlib.createGzip(); // assumed definition; the trace below shows a Gzip stream

var files = fs.readdirSync(folderWithManyFiles); // folderWithManyFiles as in the original report
files.map(function (file) {
  var inp = fs.createReadStream(file);
  var out = fs.createWriteStream(file + '.gz');
  inp.pipe(gzip).pipe(out); // every file pipes through the same gzip stream
});
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. 
Use emitter.setMaxListeners() to increase limit.
Trace
    at Gzip.EventEmitter.addListener (events.js:175:15)
    at Gzip.Stream.pipe (stream.js:44:10)
[...]

@goferito Discussion of the issue moved to #5504 and has been fixed for the next release of node: joyent#5504 (comment)

Reproducible in v0.10.2 on a Raspberry Pi (wheezy)

Server:

var http = require('http');

http.createServer(function(req, resp) {
    resp.writeHead(200, {"Content-Type": "text/plain"});
    resp.write("Hello World");
    resp.end();
}).listen(8124);

Client:

ab -n 1000 -c 100 http://<host>:8124/ 
Owner

bnoordhuis commented Jun 10, 2013

@arunwizz With all due respect but v0.10.2 is ancient history by now. Always test with the latest release when reporting bugs.

arunkjn commented Jul 12, 2013

I encountered this as well.
Server - CentOS release 6.2 (Final) - 4 core CPU - 16GB ram.
This issue comes up sometimes when using http-proxy@0.10.2 with node v0.10.7.
I am using it to proxy both WebSocket and HTTP traffic.
The CPU spikes at 25% when I get this; I think that is because it is using only 1 core.
Restarting the application fixes the issue. It is random and occurs after about 2-3 days of app runtime.

@arunkjn Try updating to node 0.10.11 or above. Also, check out this page: joyent#5504 (comment)

keichii commented Jul 26, 2013

I've tried the compiled GitHub version of trunk but it's still the same.

isaacs commented Jul 27, 2013

@arunkjn @keichii I believe that you are encountering a bug, but there is simply no way it's the same bug as this one, even if it has the same words in the error message. Can you please open a new issue with your details about how to encounter it?

Being a new Node.js user, my first Node.js app, serving static files via HTTP using createReadStream, is producing the bug...
I've put more details on Stack Overflow:
http://stackoverflow.com/questions/17971641/possible-eventemitter-memory-leak-detected-nodejs-v-0-10-4-on-centos

I would gladly provide more details about my machine; here is what I have:
CentOS release 5.3 (Final)
4 X Intel(R) Xeon(R) CPU - 2.4Ghz

     2026 M total memory
     1974 M used memory
      272 M active memory
     1659 M inactive memory
       52 M free memory
        6 M buffer memory

Almost forgot the most important: node.js version v0.10.4
:)

Owner

bnoordhuis commented Jul 31, 2013

@syberkitten Upgrade. You're 11 bug fix releases behind, the latest stable is v0.10.15.

thanks, will do and report.

yesbird commented Sep 24, 2013

Got this error on version 0.10.13

Setting:

var server = http.createServer(app);
server.setMaxListeners(0); // Unlimited

This did not help.

Upgrading to 0.10.18 helped!
Thanks for the suggestion.

Summary: node has good potential, but it is still not stable for serious things. It needs more work; we will wait.

n3m6 commented Nov 30, 2013

This error repeats on version 0.10.22, the latest stable, while running mocha tests with zombie.js.

I had several test files with the following statements:

process.on('uncaughtException', function (err) {
  console.log('UNCAUGHT EXCEPTION');
  console.log(err.stack || err.message);
});

I commented out all of them, on all files, and the error went away.

+1
Still there in v0.10.24

Owner

indutny commented Jan 20, 2014

Surely it is still there, as we still do have this check. What exactly is happening, and why do you think it should not happen?

@schlingel @indutny If you are experiencing the same symptoms, perhaps you should come up with a test case?

This issue was closed over half a year ago in #5504 and node was patched to resolve the bug we were experiencing on CentOS kernels (demonstrated by this test case). This test case clearly fails before the patch and passes afterwards on my machine, so I find it difficult to believe that it's the same bug.

Owner

indutny commented Jan 20, 2014

@schlingel I'm not experiencing any issues, and yes, I need a test case in order to help you with the problem.

Is this possibly the same bug? I get these from time to time:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at Socket.EventEmitter.addListener (events.js:160:15)
    at Socket.Readable.on (_stream_readable.js:689:33)
    at Socket.EventEmitter.once (events.js:185:8)
    at Request.onResponse (D:\Dropbox\projects\domains2\node_modules\request\request.js:713:25)
    at ClientRequest.g (events.js:180:16)
    at ClientRequest.EventEmitter.emit (events.js:95:17)
    at HTTPParser.parserOnIncomingClient [as onIncoming] (http.js:1688:21)
    at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:121:23)
    at Socket.socketOnData [as ondata] (http.js:1583:20)
    at TCP.onread (net.js:527:27)

And after 5 hours

FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory

The node version is 0.10.26 and the OS is Windows 8, 64-bit.

joscha commented Apr 24, 2014

0.10.26 on OSX also has this problem with gulp and many watchers.

This might be related to an issue I found in the libuv TCP echo server: joyent/libuv#1249

I just experienced this error on Ubuntu 14

I'm getting it periodically on v0.10.26, Win8

shanebo commented Jun 5, 2014

I'm also getting this error on Node v0.10.22 on AppFog / AWS:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.

@trevnorris trevnorris reopened this Jun 5, 2014

So my last two posts (which I've deleted but some may have received emails about them) had bad tests. Here's the correct one:

var net = require('net');
var cntr = 0;
var c;

setImmediate(function() {
  process._rawDebug('cntr: ' + cntr);
});

function writeData() {
  if (!(++cntr < 1e7 && c.write('hi', writeData)))
    c.destroy();
}

c = net.connect('8000', writeData);

In v0.10 this will cause your machine to explode in fire. Though everyone should know the issue has been fixed in v0.11.

The reason is that, as you'll see if you run the test in master, cntr == 1e7 in setImmediate(). Basically, because the data can be written out immediately, the callback is immediately queued to be called again, and the loop never proceeds past the uv__io_poll() phase of the event loop.

Though there's a difference between your issue and this example: you're reading data, not writing it. But in the same manner, if the data can be read quickly enough (usually this happens when reading many tiny chunks of memory) and Node reports that more data can be read, the callback is immediately queued to be executed. This never allows resources to be cleaned up.

I'll attempt to write up a quick example for the read case now.
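
(For illustration only, here is a hedged sketch of what such a read-case test might look like; it is my assumption, not the promised follow-up. A local server floods tiny chunks at a client so that reads keep completing immediately:)

var net = require('net');
var cntr = 0;

var server = net.createServer(function(socket) {
  socket.on('error', function() {}); // ignore ECONNRESET when the client bails out
  (function flood() {
    if (socket.destroyed) return;
    // Write 2-byte chunks until the kernel buffer fills, then wait for 'drain'.
    while (socket.write('hi'));
    socket.once('drain', flood);
  })();
});

server.listen(8000, function() {
  var c = net.connect(8000);

  setImmediate(function() {
    // As in the write test above, this shows how much was read before the
    // event loop was allowed to move on.
    process._rawDebug('cntr: ' + cntr);
  });

  c.on('data', function(chunk) {
    cntr += chunk.length;
    if (cntr >= 1e8) { // stop after ~100 MB has been read
      c.destroy();
      server.close();
    }
  });
});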

shanebo commented Jul 2, 2014

Until v0.11 is public, is there anything that can be done to resolve this issue?

@evenfrost @bholben are you seeing this in net? If not, it may not be the same issue.

@chrisdickinson you're right, sorry for messing it up.

bholben commented Nov 16, 2014

No. I'm not working in the net module. I guess I'm in the wrong place.

I am getting this error in v0.11.13 during npm init:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at ReadStream.EventEmitter.addListener (events.js:179:15)
    at ReadStream.Readable.on (_stream_readable.js:667:33)
    at new Interface (readline.js:124:11)
    at Object.exports.createInterface (readline.js:38:10)
    at read (/usr/local/lib/node_modules/npm/node_modules/read/lib/read.js:45:23)
    at /usr/local/lib/node_modules/npm/node_modules/init-package-json/init-package-json.js:95:9
    at final (/usr/local/lib/node_modules/npm/node_modules/read-package-json/read-json.js:349:17)
    at then (/usr/local/lib/node_modules/npm/node_modules/read-package-json/read-json.js:126:33)
    at /usr/local/lib/node_modules/npm/node_modules/read-package-json/read-json.js:316:48
    at fs.js:228:20

Still getting this error on v0.13.0-pre...

(node) warning: possible EventEmitter memory leak detected. 11 error listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at PoolConnection.addListener (events.js:179:15)
    at /srv/nodeServer/app/rest-api.js:48:14
    at Ping.onPing [as _callback] (/srv/nodeServer/app/node_modules/mysql/lib/Pool.js:94:5)
    at Ping.Sequence.end (/srv/nodeServer/app/node_modules/mysql/lib/protocol/sequences/Sequence.js:96:24)
    at Ping.Sequence.OkPacket (/srv/nodeServer/app/node_modules/mysql/lib/protocol/sequences/Sequence.js:105:8)
    at Protocol._parsePacket (/srv/nodeServer/app/node_modules/mysql/lib/protocol/Protocol.js:271:23)
    at Parser.write (/srv/nodeServer/app/node_modules/mysql/lib/protocol/Parser.js:77:12)
    at Protocol.write (/srv/nodeServer/app/node_modules/mysql/lib/protocol/Protocol.js:39:16)
    at Socket.<anonymous> (/srv/nodeServer/app/node_modules/mysql/lib/Connection.js:82:28)
    at Socket.emit (events.js:107:17)

v0lkan commented Feb 10, 2015

Getting the error with PubNub especially under poor network conditions.

First I suspected it was due to event loop being congested (I was programming a microcontroller, so blocking the event loop is easier than a beefier web server) — though no matter how slow I open the sockets, yield things with setTimeout/setImmediate etc; I still got the error.

node v0.10.32 on Tessel TM-00-04 (firmware updated on Feb 9th 2015)

I don’t have diagnostic info to share right now; I will post when I have some.

Still happening in 0.10.36 :S

@trevnorris trevnorris self-assigned this Mar 6, 2015

Still getting this on mysql pool connection - 0.10.28 (Different server than above) - CentOS

"connection.on('error', function (err) {"

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at PoolConnection.EventEmitter.addListener (events.js:160:15)
    at .../nodejs/nodeServer/app/rest-api.js:47:15
    at Ping.onPing [as _callback] (.../app/node_modules/mysql/lib/Pool.js:94:5)
    at Ping.Sequence.end (...app/node_modules/mysql/lib/protocol/sequences/Sequence.js:96:24)
    at Ping.Sequence.OkPacket (.../app/node_modules/mysql/lib/protocol/sequences/Sequence.js:105:8)
    at Protocol._parsePacket (.../app/node_modules/mysql/lib/protocol/Protocol.js:271:23)
    at Parser.write (/.../app/node_modules/mysql/lib/protocol/Parser.js:77:12)
    at Protocol.write (...app/node_modules/mysql/lib/protocol/Protocol.js:39:16)
    at Socket.<anonymous> (/.../app/node_modules/mysql/lib/Connection.js:82:28)
    at Socket.EventEmitter.emit (events.js:95:17)

mz3 commented Jun 16, 2015

I'm getting this error using Grunt with Node v0.12.2.

Member

sam-github commented Jun 18, 2015

I think this should be closed. It's expected to get this warning when adding more than 11 event listeners to an emitter. You can get all the listeners from the EE and investigate which listeners were added, but unless there is evidence that node itself is leaking emitters, this is expected behaviour. Gather that evidence and reopen if it can be found.
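
As an illustration of that suggestion, here is a hedged, self-contained sketch (the net server and the 'end' event are placeholders for whatever emitter and event are accumulating listeners in your application):

// Use emitter.listeners(event) to see what has piled up on a suspect emitter.
var net = require('net');

var server = net.createServer(function(socket) {
  var ends = socket.listeners('end');
  console.log('%d "end" listener(s) on this socket:', ends.length);
  ends.forEach(function(fn) {
    // Print the name or first line of each handler to identify where it came from.
    console.log('  -', fn.name || fn.toString().split('\n')[0]);
  });
  socket.end();
  server.close();
});

server.listen(0, function() {
  net.connect(server.address().port);
});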

Owner

jasnell commented Jun 22, 2015

@jasnell jasnell closed this Jun 22, 2015

ambodi commented Dec 4, 2015

I am still getting this error with Node 4.2.1 using JSDom 7.1.1 when trying to scrape a page every second:

(node) warning: possible EventEmitter memory leak detected. 11 error listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at Socket.addListener (events.js:239:17)
    at Socket.Readable.on (_stream_readable.js:665:33)
    at Object.done (/Users/ara/dev/iteam/data-mining/streamers/scrape/inloggedcars.js:24:22)
    at /Users/ara/dev/iteam/data-mining/streamers/node_modules/jsdom/lib/jsdom.js:271:18
    at doNTCallback0 (node.js:417:9)
    at process._tickCallback (node.js:346:13)

I got the same issue with this code, using StrongLoop (LoopBack):

module.exports = function (app) {
  var dataSourceDS1 = app.dataSources.DS1;
  var dataSourceDS2 = app.dataSources.DS2;
  var ReLocationTbl = dataSourceDS1.models["ReLocationTbl"];
  var MmLocationTbl = dataSourceDS2.models["MmLocationTbl"];
  MmLocationTbl.find(function (err, data) {
    if (!err) {
      console.log(data.length);
      data.forEach(function (location) {
        console.log(location);
        ReLocationTbl.create(location, function (err, obj) {
          if (!err) {
            console.log("Success posting " + location);
          } else {
            console.log("Failed posting " + location);
          }
          console.log(err);
        });
      });
    }
  });
};

+1

This has probably been mentioned multiple times in this thread, but because nobody bothers to read it, here's a comment from literally 5 comments above mine:

I think this should be closed. It's expected to get this warning when adding more than 11 event listeners to an emitter. You can get all the listeners from the EE and investigate which listeners were added, but unless there is evidence that node itself is leaking emitters, this is expected behaviour. Gather that evidence and reopen if it can be found.

tuwid commented Apr 12, 2016

Having the same issue with request (which uses the http core module).

I agree with @calzoneman; the fix in my case was that I had to call
someEmitterObj.removeListener('eventName', func);

doc reference
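
For anyone landing here later, a minimal sketch of that pattern (the emitter and event name are placeholders):

// Keep a reference to the handler so the exact same function can be removed
// later, instead of letting anonymous copies accumulate on the emitter.
var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();

function onEvent(payload) {
  console.log('got', payload);
}

emitter.on('eventName', onEvent);
emitter.emit('eventName', 42);                 // logs: got 42
emitter.removeListener('eventName', onEvent);  // detaches the same reference
console.log(emitter.listeners('eventName').length); // 0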
