
Meteor producing a large number of TIME_WAIT TCP connections on Windows #4125

Closed

xpressivecode opened this issue Apr 4, 2015 · 4 comments

@xpressivecode

After successfully running Meteor, any subsequent attempt to restart it (including Meteor's own restarts in reaction to code changes) results in it crashing:

Error: Couldn't run netstat -ano: {}

Running this command manually, I see an extremely long list of TIME_WAIT TCP connections. So it's not that the command couldn't run; more likely the child process is timing out?

Waiting some number of seconds (or minutes) until the list clears lets you continue, but this obviously hurts dev time and the rapid feedback loop while making changes.

Meteor version: 1.1.0.1
OS: Windows 8 Pro (64bit)

Here is the stack trace:

C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.windows.x86_32\dev_bundle\lib\node_modules\fibers\future.js:278
                throw(ex);
                      ^
Error: Couldn't run netstat -ano: {}
    at Object.Future.wait (C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.windows.x86_32\dev_bundle\lib\node_modules\fibers\future.js:398:15)
    at findMongoPids (C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.windows.x86_32\tools\run-mongo.js:120:16)
    at findMongoAndKillItDead (C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.windows.x86_32\tools\run-mongo.js:254:14)
    at launchOneMongoAndWaitForReadyForInitiate (C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.windows.x86_32\tools\run-mongo.js:378:7)
    at launchMongo (C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.windows.x86_32\tools\run-mongo.js:630:7)
    at [object Object]._.extend._startOrRestart (C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.windows.x86_32\tools\run-mongo.js:730:19)
    at [object Object]._.extend.start (C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.windows.x86_32\tools\run-mongo.js:688:10)
    at C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.windows.x86_32\tools\run-all.js:199:26
    - - - - -
    at C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.windows.x86_32\tools\run-mongo.js:89:28
    at ChildProcess.exithandler (child_process.js:654:7)
    at ChildProcess.emit (events.js:98:17)
    at maybeClose (child_process.js:766:16)
    at Process.ChildProcess._handle.onexit (child_process.js:833:5)
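
For reference, a quick way to see how many TIME_WAIT entries are piling up without scrolling the whole table (just a rough sketch, not part of Meteor's tooling; find /c prints only the match count, so the output stays tiny):

// Sketch: count TIME_WAIT sockets instead of dumping the whole netstat table.
var exec = require('child_process').exec;

exec('netstat -ano | find /c "TIME_WAIT"', function (error, stdout) {
  if (error) {
    console.error('netstat failed:', error.message);
    return;
  }
  console.log('TIME_WAIT sockets:', stdout.trim());
});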
@gauravdhiman

+1

I also got this error today. After a few tries, it worked fine. I'm not sure what caused it.

@Lamarius

Lamarius commented Apr 6, 2015

I had this issue today as well. For me, the output of netstat -ano was exceeding exec's maximum stdout buffer size, which defaults to 200 KB. I increased the limit and it started working again.

In run-mongo.js (for me it was under AppData: ...\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.windows.x86_32\tools\run-mongo.js, line 87), change the following line:

child_process.exec('netstat -ano', function (error, stdout, stderr) {

to this:

child_process.exec('netstat -ano', {maxBuffer: 1024 * 1024}, function (error, stdout, stderr) {

Note that the max buffer may not have to be that large, but I didn't want to mess around.
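
To make the failure mode concrete, here is a rough standalone sketch (not Meteor's code) of what happens when stdout grows past maxBuffer; this is the path run-mongo.js hits once the TIME_WAIT list gets long:

// Sketch: exec() aborts the child and reports an error once stdout exceeds maxBuffer.
var exec = require('child_process').exec;

exec('netstat -ano', { maxBuffer: 16 * 1024 }, function (error, stdout, stderr) {
  if (error) {
    // With enough connections, this fires with "stdout maxBuffer exceeded".
    console.error('exec failed:', error.message);
    return;
  }
  console.log('netstat output fit in the buffer:', stdout.length, 'characters');
});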

@xpressivecode
Author

I believe this is the same issue I'm facing. However, even with the buffer set to 1024 * 1024, I still get the error "stdout maxBuffer exceeded". Setting it to something absurdly high does fix it, though.

child_process.exec('netstat -ano', { maxBuffer: 9024 * 9024 }, function (error, stdout, stderr){
...
});

If I run the command manually, it takes approximately 26 seconds to come back. Most of the output looks something like this:

 TCP    127.0.0.1:65470        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65471        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65472        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65473        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65474        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65475        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65476        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65477        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65478        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65479        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65480        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65481        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65482        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65483        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65484        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65485        127.0.0.1:21813        TIME_WAIT       0
 TCP    127.0.0.1:65486        127.0.0.1:21813        TIME_WAIT       0

I have no idea if this is the intended behaviour, but at least we have a workaround for the moment. Thanks @Lamarius
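
An alternative that sidesteps maxBuffer entirely (just a sketch of the idea, not the change Meteor shipped) is to stream the output with spawn instead of buffering it all through exec:

// Sketch: spawn streams stdout, so there is no maxBuffer limit to hit.
var spawn = require('child_process').spawn;

var netstat = spawn('netstat', ['-ano']);
var leftover = '';
var timeWait = 0;

netstat.stdout.on('data', function (chunk) {
  var lines = (leftover + chunk).split('\n');
  leftover = lines.pop(); // keep a possibly partial last line for the next chunk
  lines.forEach(function (line) {
    if (line.indexOf('TIME_WAIT') !== -1) timeWait++;
  });
});

netstat.on('close', function () {
  if (leftover.indexOf('TIME_WAIT') !== -1) timeWait++;
  console.log('TIME_WAIT sockets:', timeWait);
});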

@glasser
Contributor

glasser commented Apr 14, 2015

Hmm, we made a similar fix for Unix (#2158); we should probably just boost this buffer too.

It's a little weird that these sockets are all being leaked though (TIME_WAIT means waiting for the other side to properly close the socket I think).

@Slava mentioned this issue May 19, 2015