EventEmitter memory leak on node v0.3.5, with possible fix #21

Closed
thegreatape opened this Issue Jan 21, 2011 · 18 comments

@thegreatape

I created a very simple http-proxy instance for some testing: https://gist.github.com/790342

Running on node v0.3.5, the proxy prints the following warning as soon as an HTTP request is made:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
at Pool.<anonymous> (events.js:101:17)
at Object.proxyRequest (/usr/local/lib/node/.npm/http-proxy/0.3.1/package/lib/node-http-proxy.js:185:7)
at Server.<anonymous> (/usr/local/lib/node/.npm/http-proxy/0.3.1/package/lib/node-http-proxy.js:91:13)
at Server.emit (events.js:45:17)
at HTTPParser.onIncoming (http.js:862:12)
at HTTPParser.onHeadersComplete (http.js:85:31)
at Socket.ondata (http.js:787:22)
at Socket._onReadable (net.js:623:27)
at IOWatcher.onReadable [as callback] (net.js:156:10)

Under load, it does appear to be leaking memory: after about five minutes of Siege hitting the proxy with 400 concurrent requests, it dies with:
FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory

However, removing the (empty?) pool error handler on line 184 of node-http-proxy.js appears to fix this. Should that handler be attached once when the pool is created, rather than every time the pool is retrieved for a request?
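
For illustration, a minimal sketch of the pattern being described (this is not the actual node-http-proxy source; proxyRequestLeaky and createPool are hypothetical names): a long-lived pool that gets a fresh 'error' listener on every proxied request accumulates listeners past node's default limit of 10, which is exactly what the warning reports, whereas attaching the handler once at pool creation avoids it.

var EventEmitter = require('events').EventEmitter;

var pool = new EventEmitter(); // stands in for the shared per-host connection pool

function proxyRequestLeaky(req, res) {
  pool.on('error', function () {}); // attached again on every request, so listeners pile up
  // ... proxy the request ...
}

function createPool() {
  var p = new EventEmitter();
  p.on('error', function (err) {
    // handle or log pool errors once, for the lifetime of the pool
  });
  return p;
}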

@indexzero

indexzero (Member) commented Jan 21, 2011

We've been seeing this memory leak in production but hadn't been able to track it down yet.

We're running node 0.3.1 (for at least minimal HTTPS support), which would explain why we don't see the warning ourselves.

Thanks for the heads up; I'll investigate and get back to you.


@thegreatape

thegreatape commented Jan 21, 2011

Interesting. I ran the same tests on node v0.3.1 with the same results: with the pool error handler in place, memory leaks occur; without it, the proxy runs like a champ under load. :-)


@indexzero

indexzero (Member) commented Jan 21, 2011

Nice. I'll try Siege myself; I'd only been using ab up until now, but this looks way better.

Thanks again.


@joelklabo

joelklabo commented Feb 1, 2011

I'm having the same issue on 0.3.7. Did anyone solve this?


@indexzero

indexzero (Member) commented Feb 1, 2011

All,

We will be dropping the pool dependency when we upgrade to 0.4.x in the next month or so, because node.js > 0.3.6 does connection pooling for us.

In the meantime, let me know how removing the pool error handler works out in production. I don't think it's the root cause, and not handling the error event can lead to uncaught exceptions under high load.
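
A minimal sketch of why that matters, assuming node's standard EventEmitter behavior (this is not node-http-proxy code): an 'error' event emitted with no listener attached is thrown as an exception and, if nothing catches it, crashes the process.

var EventEmitter = require('events').EventEmitter;

var pool = new EventEmitter();

// With no 'error' listener, the next line would throw and take the process down:
// pool.emit('error', new Error('upstream connection reset'));

// With a listener attached, the same event is handled quietly:
pool.on('error', function (err) {
  console.error('pool error:', err.message);
});
pool.emit('error', new Error('upstream connection reset'));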


@davidmoshal

davidmoshal commented Mar 4, 2011

Hi, what's the status of this bug?

I'm getting this error on node 0.4:

(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace:
at Pool.<anonymous> (events.js:101:17)
at Object.proxyRequest (/usr/local/lib/node/.npm/http-proxy/0.3.1/package/lib/node-http-proxy.js:185:7)
at Server.<anonymous> (/Users/davem/Petromatch/repository/eclipse/petromatch-java/node/faye-server.js:12:3)
at Server.<anonymous> (/usr/local/lib/node/.npm/faye/0.5.5/package/faye-node.js:1952:22)
at Server.emit (events.js:45:17)
at HTTPParser.onIncoming (http.js:1078:12)
at HTTPParser.onHeadersComplete (http.js:87:31)
at Socket.ondata (http.js:977:22)
at Socket._onReadable (net.js:654:27)
at IOWatcher.onReadable [as callback]


@davidmoshal

davidmoshal commented Mar 4, 2011

Oh, and node is definitely leaking.


@indexzero

indexzero (Member) commented Mar 4, 2011

Working on an update for node 0.4.0. A lot of this has to do with the pool dependency, which we can remove by using the http.Agent API in 0.4.0.
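
For context, a minimal sketch assuming node >= 0.4 (the host and port values are hypothetical; this is not node-http-proxy code): outgoing requests made with http.request go through the built-in http.Agent, which pools connections per host and port, so a separate pool module is no longer needed.

var http = require('http');

var upstream = http.request({
  host: 'localhost',
  port: 8080,
  method: 'GET',
  path: '/'
}, function (res) {
  res.on('data', function (chunk) {
    // a real proxy would forward each chunk back to the original client here
  });
});
upstream.end();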


@davidmoshal

davidmoshal commented Mar 4, 2011

Thanks, rough ETA?
Node is now at 760 MB after 6 hours of very light dev testing (a couple of connections),
i.e. I wouldn't feel comfortable moving to production.


@davidmoshal

davidmoshal commented Mar 4, 2011

PS: I also posted this on the node.js Google group and got some interesting replies:

https://groups.google.com/d/topic/nodejs/TbHMjwMdRRY/discussion


@davidmoshal

davidmoshal commented Mar 4, 2011

Note the following comment regarding the code I posted, which is based on the example in the node.js readme:

https://groups.google.com/d/msg/nodejs/TbHMjwMdRRY/eoDRenX0UCEJ

Is it true that every request creates a new proxy that isn't garbage collected?


@indexzero

indexzero (Member) commented Mar 4, 2011

It is true that every request creates a new proxy. It is false that it is not garbage collected.


@indexzero

indexzero (Member) commented Mar 4, 2011

I replied to the thread here: https://groups.google.com/forum/#!msg/nodejs/TbHMjwMdRRY/eoDRenX0UCEJ

Will try to bang this out in the next week or so.


@davidmoshal

davidmoshal commented Mar 4, 2011

I replaced this (which I found on your site):

var server = http.createServer(function (req, res) {
  new httpProxy
    .HttpProxy(req, res)
    .proxyRequest(8080, 'localhost', req, res);
});

with:

var server = httpProxy.createServer(8080, 'localhost');

Functionality is the same, and the leak does seem to be smaller, but I'll know later today.


@davidmoshal

davidmoshal commented Mar 5, 2011

I ran node-inspector on my code with V8 profiling.
As far as I can tell, HttpProxy instances are not being GC'd.

Every request increments HttpProxy, ClientRequest and ServerResponse counts.
These counts never decrease, even if the system is left alone for some time.

I have a screenshot of the profiler, though I don't see a way to attach files here, so email me if you want to see it: dmoshal at gmail dot com.


@indexzero

indexzero (Member) commented Mar 11, 2011

I have fixed this issue in the v0.4.0 branch. Unfortunately, it is not stable due to a few bugs in the node.js http.Agent APIs that I am working to resolve.

We will push v0.4.0 when we can get these issues resolved in core; hopefully the aforementioned fixes will make it into 0.4.3.


@davidmoshal

davidmoshal commented Mar 11, 2011

Thanks, I'm using nginx for now (without websockets).

Dave



This issue was closed.
