EventEmitter memory leak on node v0.3.5, with possible fix #21

Closed
thegreatape opened this Issue · 18 comments

4 participants

@thegreatape

I created a very simple http-proxy instance for some testing: https://gist.github.com/790342

Running on node v0.3.5, this immediately returns the warning:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
at Pool.<anonymous> (events.js:101:17)
at Object.proxyRequest (/usr/local/lib/node/.npm/http-proxy/0.3.1/package/lib/node-http-proxy.js:185:7)
at Server.<anonymous> (/usr/local/lib/node/.npm/http-proxy/0.3.1/package/lib/node-http-proxy.js:91:13)
at Server.emit (events.js:45:17)
at HTTPParser.onIncoming (http.js:862:12)
at HTTPParser.onHeadersComplete (http.js:85:31)
at Socket.ondata (http.js:787:22)
at Socket._onReadable (net.js:623:27)
at IOWatcher.onReadable [as callback] (net.js:156:10)

when an http request is made. Under load, it appears this is indeed leaking memory. After about 5 minutes with Siege hitting the proxy with 400 concurrent requests, it dies with:
FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory

However, removing the (empty?) pool error handler on line 184 of node-http-proxy.js appears to fix this. Should this be set once upon pool creation, rather than when the pool is retrieved for each request?
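
For illustration, a minimal sketch of the pattern I'm describing (names here are hypothetical; the real pool object lives inside node-http-proxy):

var EventEmitter = require('events').EventEmitter;

// Stand-in for the shared connection pool that is reused across requests.
var pool = new EventEmitter();

// Anti-pattern: attaching a fresh 'error' listener on every proxied request.
// Listeners accumulate on the shared emitter, and node warns once the
// default limit of 10 is exceeded (hence "11 listeners added").
function proxyRequest(req, res) {
  pool.on('error', function () {});
  // ... proxy the request ...
}

// Suggested fix: attach the handler once, when the pool is created.
pool.on('error', function (err) {
  console.error('pool error:', err);
});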

@indexzero
Owner

We've been seeing this memory leak in production but haven't been able to track it down yet.

We're running node 0.3.1 (for at least minimal https support), which would explain why we don't see the warning.

Thanks for the heads up. I'll investigate and get back to you.

@thegreatape

Interesting. I ran the same tests using node v0.3.1, with the same results: with the pool error handler in place, memory leaks occur. Without the error handler, it runs like a champ under load. :-)

@indexzero
Owner

Nice. I'll try siege myself; I'd only been using ab up till now, but this looks way better.

Thanks again.

@joelklabo

I'm having the same issue on 0.3.7. Did anyone solve this?

@indexzero
Owner

All,

We will be dropping pool when we upgrade to 0.4.x in the next month or so because node.js > 0.3.6 does connection pooling for us.

In the meantime, let me know how removing the pool error handler works in prod. I don't think it's the root cause, and leaving pool errors unhandled can cause uncaught exceptions under high load.
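
For anyone following along, a hedged sketch of what the built-in pooling looks like; note this uses today's http.Agent API, which differs in detail from the 0.4-era interface:

var http = require('http');

// Each agent keeps a per-host socket pool, which is what makes the
// separate `pool` dependency unnecessary.
var agent = new http.Agent({ maxSockets: 100 });

var req = http.request({
  host: 'localhost',
  port: 8080,
  path: '/',
  agent: agent // requests sharing this agent reuse its sockets
}, function (res) {
  res.resume(); // drain the body so the socket can be reused
});
req.end();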

@davidmoshal

Hi, what's the status of this bug?

I'm getting this error, on node 0.4:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace:
at Pool.<anonymous> (events.js:101:17)
at Object.proxyRequest (/usr/local/lib/node/.npm/http-proxy/0.3.1/package/lib/node-http-proxy.js:185:7)
at Server.<anonymous> (/Users/davem/Petromatch/repository/eclipse/petromatch-java/node/faye-server.js:12:3)
at Server.<anonymous> (/usr/local/lib/node/.npm/faye/0.5.5/package/faye-node.js:1952:22)
at Server.emit (events.js:45:17)
at HTTPParser.onIncoming (http.js:1078:12)
at HTTPParser.onHeadersComplete (http.js:87:31)
at Socket.ondata (http.js:977:22)
at Socket._onReadable (net.js:654:27)
at IOWatcher.onReadable [as callback]

@davidmoshal

oh, and node is definitely leaking.

@indexzero
Owner

Working on an update for node 0.4.0. A lot of this has to do with the pool dependency, which we can remove using the http.Agent API in 0.4.0.

@davidmoshal

Thanks, any rough ETA?
Node is now at 760 MB after 6 hours of very light dev testing (a couple of connections),
i.e. I wouldn't feel comfortable moving to production.

@davidmoshal

PS: I also posted this on the node.js Google group and got some interesting replies:

https://groups.google.com/d/topic/nodejs/TbHMjwMdRRY/discussion

@davidmoshal

Note the following comment regarding the code I posted, which is based on the example in the node.js readme:

https://groups.google.com/d/msg/nodejs/TbHMjwMdRRY/eoDRenX0UCEJ

Is it true that every request creates a new proxy which isn't garbage collected?

@indexzero
Owner

It is true that every request creates a new proxy. It is false that it is not garbage collected.

@indexzero
Owner

I replied to the thread here: https://groups.google.com/forum/#!msg/nodejs/TbHMjwMdRRY/eoDRenX0UCEJ

Will try to bang this out in the next week or so.

@davidmoshal

I replaced this (which I found on your site):

var server = http.createServer(function (req, res) {
  new httpProxy.HttpProxy(req, res).proxyRequest(8080, 'localhost', req, res);
});

with:

var server = httpProxy.createServer(8080, 'localhost');

Functionality is the same, and the leak does seem smaller, but I'll know later today.
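
A hedged note on that second form, going by the node-http-proxy 0.x README style: the returned server still needs listen() to bind a port (the 8000 below is just an example value):

var httpProxy = require('http-proxy');

// Proxy everything arriving on port 8000 to localhost:8080.
httpProxy.createServer(8080, 'localhost').listen(8000);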

@davidmoshal

I ran node-inspector on my code with v8-profiler.
As far as I can tell, HttpProxy is not being GC'd.

Every request increments HttpProxy, ClientRequest and ServerResponse counts.
These counts never decrease, even if the system is left alone for some time.

I have a screenshot of the profiler, though I don't see a way to attach files here, so email me if you want to see it: dmoshal at gmail dot com.
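
A lighter-weight way to corroborate the growth without a profiler is to log heap numbers from inside the process; a minimal sketch using node's built-in process.memoryUsage():

// Print RSS and heap usage every 5 seconds while the proxy is under load;
// steadily rising numbers corroborate the leak described above.
setInterval(function () {
  var mem = process.memoryUsage();
  console.log('rss:', mem.rss, 'heapUsed:', mem.heapUsed);
}, 5000);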

@indexzero
Owner

I have fixed this issue in the v0.4.0 branch. Unfortunately, it is not stable due to a few bugs in the node.js http.Agent APIs that I am working to resolve.

We will push v0.4.0 when we can get these issues resolved in core; hopefully the aforementioned fixes will make it into 0.4.3.

@davidmoshal
This issue was closed.