Many stale connections in state CLOSE_WAIT and FIN_WAIT2 #1040

Closed
njam opened this issue Sep 27, 2012 · 13 comments

Comments

@njam

njam commented Sep 27, 2012

I'm running a pretty basic Socket.IO application which just relays messages from a redis pub/sub queue to clients.
The application uses the current Socket.IO 0.9.10 on Node.js 0.8.10, and all transports are enabled.
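
For context, here's a minimal sketch of the kind of relay I mean (channel and event names are illustrative, not the actual code):

```js
// Minimal sketch: forward Redis pub/sub messages to all connected Socket.IO clients.
var io = require('socket.io').listen(8080);   // Socket.IO 0.9.x server
var redis = require('redis');

var sub = redis.createClient();
sub.subscribe('updates');                      // hypothetical channel name

// Every pub/sub message is broadcast to every connected client.
sub.on('message', function (channel, message) {
  io.sockets.emit('update', message);
});

io.sockets.on('connection', function (socket) {
  // Clients only receive; nothing flows back to Redis in this sketch.
});
```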

There are ~4000 simultaneous connections, but after some hours the server has 4 times that many TCP connections. These connections seem to be stale, sitting in state CLOSE_WAIT or FIN_WAIT2.
The number of these undead connections grows linearly over time and results in high memory usage and load.

[Cacti graph of TCP connection counts]

As far as I could find out via Google, this is a result of clients not closing connections correctly. My understanding is that the application (Socket.IO) should force-close these connections after some timeout. Is that correct? Is there a bug in Socket.IO?
Any ideas for further debugging?
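
To illustrate what I mean by "force-close", here is a sketch using plain Node APIs (the timeout value and wiring are my assumption, not Socket.IO internals):

```js
// Sketch: destroy any TCP connection that has been idle too long (assumption,
// not Socket.IO's actual behaviour).
var http = require('http');
var server = http.createServer();

server.on('connection', function (socket) {
  socket.setTimeout(60 * 1000);        // arbitrary 60s idle limit
  socket.on('timeout', function () {
    socket.destroy();                  // force-close the stale socket
  });
});

server.listen(8080);
```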

Thanks

@taktran

taktran commented Oct 11, 2012

I have the same issue, and from some digging around, it seems to be a V8 engine bug (it affects libraries other than socket.io as well) - #1015 (comment)

"I've been testing my current setup using 4GB of ram and I can only get to about 200-300 users before memory is sucked up in a few hours. I'm really not doing much other than some redis pubsub and relaying messages." - #1015 (comment)

The suggested solutions are to disable websockets, or to use Node version 0.4.12 (which may break other things). Or, I guess, restart the server every so often (using monit?).
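
For reference, disabling the websocket transport in Socket.IO 0.9 looks roughly like this (a sketch; the remaining transport list is just an example):

```js
// Sketch: restrict Socket.IO 0.9 to polling transports so the websocket path is avoided.
var io = require('socket.io').listen(8080);

io.configure(function () {
  io.set('transports', ['xhr-polling', 'jsonp-polling']);
});
```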

Also see:
websockets/ws#43
#463

@njam
Author

njam commented Oct 11, 2012

Thanks for the info.
I'm now using SockJS, which doesn't have this problem, with the same Node version.

@konklone

I was just wrestling with what I think is this issue, on my own socket.io/redis app, on Nodejitsu. It definitely seems like it's socket.io's Redis store. I'm going to follow the OP's lead and switch to SockJS.

Chat logs of me working through it in the #nodejitsu support channel:
https://gist.github.com/4146668

@njam
Author

njam commented Nov 26, 2012

@konklone: Just to let you know, we didn't use Socket.IO's RedisStore.

@theyak

theyak commented Jun 14, 2013

The issue is still occurring in engine.io with Node v0.10. I made the simplest program possible: just connect, and the client sends data to the server every once in a while, and vice versa.
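
Roughly what the test program looks like, as a simplified sketch (the interval and messages are placeholders, not the exact code):

```js
// Simplified repro sketch: an engine.io server that occasionally pings each client
// and logs whatever the client sends back.
var engine = require('engine.io');
var server = engine.listen(3000);

server.on('connection', function (socket) {
  var timer = setInterval(function () {
    socket.send('ping');                  // server -> client every once in a while
  }, 30 * 1000);

  socket.on('message', function (data) {  // client -> server messages
    console.log('got', data);
  });

  socket.on('close', function () {
    clearInterval(timer);
  });
});
```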

@patrickod

I too was seeing this issue with v0.10 using the Redis store. I eventually moved away from socket.io to Faye, which has proven to be much more reliable in production.

@nategood

Experiencing the same thing with socket.io on Node v0.8. We end up with a thousand or so FIN_WAIT2 connections before memory maxes out.

@toblerpwn

@nategood (and others): we've had success resolving these sorts of issues in MemoryStore setups (i.e. the default store) using the latest Socket.io (v0.9.16?) and, in our case, Node.js v0.10.xx. I think the fix may have landed in 0.9.14.

Using RedisStore still has this issue, however, even with the latest Socket.io; we're going to try deploying the solution from #1303 in the next few days and see what's what.
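
For anyone comparing the two setups, the RedisStore configuration in question is roughly this (Socket.IO 0.9 API; the client wiring here is a sketch, not our exact config):

```js
// Sketch: switching Socket.IO 0.9 from the default MemoryStore to RedisStore.
var io = require('socket.io').listen(8080);
var redis = require('redis');
var RedisStore = require('socket.io/lib/stores/redis');

io.set('store', new RedisStore({
  redisPub: redis.createClient(),
  redisSub: redis.createClient(),
  redisClient: redis.createClient()
}));
```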

@SupremeTechnopriest

Any updates on this? RedisSessionStore is still leaking all over the place.

@netmikey

netmikey commented Mar 8, 2014

+1 on this one; we're also seeing it on our chat application (Node.js v0.10.22 / socket.io 0.9.16).

@Nibbler999

The FIN_WAIT2 issue can be avoided by using https://github.com/soplwang/node-ka-patch
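
If I understand the patch's name correctly, the general idea is to enable TCP keep-alive on each socket; a rough sketch with plain Node APIs (my assumption about the approach, not node-ka-patch's actual code):

```js
// Sketch of the keep-alive idea: probe idle peers so dead connections get torn down
// instead of lingering (assumption about the approach, not the library's code).
var http = require('http');
var server = http.createServer();

server.on('connection', function (socket) {
  socket.setKeepAlive(true, 30 * 1000); // start keep-alive probes after 30s idle
});

server.listen(8080);
```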

@netmikey

Awesome! Looks like https://github.com/soplwang/node-ka-patch might be a working workaround (mind the wordplay! ;)). I built it into our Node.js servers, and TCP sockets seem to behave as expected now.

@netmikey

Unfortunately, I have to revise my previous post: although FIN_WAIT2 connections are now kept at a reasonably low count, it's the CLOSE_WAIT connections that build up and use up all the file handles. The application has been running for about 2 weeks now and I'm getting:

$ netstat -an | grep $NODEPORT | grep CLOSE_WAIT | wc -l
473

This issue was closed.