
Meteor producing a large number of TIME_WAIT TCP connections on Windows #4125

xpressivecode opened this issue Apr 4, 2015 · 4 comments


After successfully running meteor, any subsequent attempts to restart meteor (including meteor itself reacting to code changes etc.) will result in it crashing:

Error: Couldn't run netstat -ano: {}

Running this command manually, I see an extremely long list of TIME_WAIT TCP connections. So it's not that the command couldn't run; more likely the child process times out?

Waiting X amount of seconds (or minutes) until the list clears allows you to continue. However, this obviously impacts dev time and receiving rapid feedback while making changes.

Meteor version:
OS: Windows 8 Pro (64bit)

Here is the stack trace:

Error: Couldn't run netstat -ano: {}
    at Object.Future.wait (C:\Users\Mike\AppData\Local\.meteor\packages\meteor-t
    at findMongoPids (C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1
    at findMongoAndKillItDead (C:\Users\Mike\AppData\Local\.meteor\packages\mete
    at launchOneMongoAndWaitForReadyForInitiate (C:\Users\Mike\AppData\Local\.me
    at launchMongo (C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1
    at [object Object]._.extend._startOrRestart (C:\Users\Mike\AppData\Local\.me
    at [object Object]._.extend.start (C:\Users\Mike\AppData\Local\.meteor\packa
    at C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.wind
    - - - - -
    at C:\Users\Mike\AppData\Local\.meteor\packages\meteor-tool\1.1.2\mt-os.wind
    at ChildProcess.exithandler (child_process.js:654:7)
    at ChildProcess.emit (events.js:98:17)
    at maybeClose (child_process.js:766:16)
    at Process.ChildProcess._handle.onexit (child_process.js:833:5)
I also got this error today. After a few tries, it worked fine. I'm not sure of the reason for the error.

Lamarius commented Apr 6, 2015

I had this issue today as well. For me, netstat -ano was exceeding the max buffer size of stdout, which has a default of 200 KB. I increased the size and it started working again.

In run-mongo.js (for me it was under AppData: ...\AppData\Local\.meteor\packages\meteor-tool\1.1.2\tools\run-mongo.js, line 87), change the following line:

child_process.exec('netstat -ano', function (error, stdout, stderr) {

to this:

child_process.exec('netstat -ano', {maxBuffer: 1024 * 1024}, function (error, stdout, stderr) {

Note that the max buffer may not have to be that large, but I didn't want to mess around.

I believe this is the same issue I'm facing. However, even when setting the buffer to 1024*1024, I still get the error stdout maxBuffer exceeded. Setting it to something absurdly high fixes the issue though.

child_process.exec('netstat -ano', { maxBuffer: 9024 * 9024 }, function (error, stdout, stderr){

If I run the command manually, it takes approx. 26 seconds for it to come back. Most of it is something that looks like:

 TCP        TIME_WAIT       0
 TCP        TIME_WAIT       0
 TCP        TIME_WAIT       0
 (the same TIME_WAIT line repeated many more times)

I have no idea if this is the intended behaviour, but at least we have a workaround for the moment. Thanks @Lamarius
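Tallying that output by connection state makes the scale of the leak easy to see. A small sketch that parses `netstat -ano`-style lines (the sample addresses are made up for illustration; real output has local/foreign address columns between the protocol and the state):

```javascript
// Sample lines in the shape of real `netstat -ano` output (addresses invented).
const sample = [
  '  TCP    127.0.0.1:3001    127.0.0.1:52113   TIME_WAIT       0',
  '  TCP    127.0.0.1:3001    127.0.0.1:52114   TIME_WAIT       0',
  '  TCP    0.0.0.0:3000      0.0.0.0:0         LISTENING       4321',
].join('\n');

// Count TCP connections by state (TIME_WAIT, LISTENING, ESTABLISHED, ...).
function countByState(netstatOutput) {
  const counts = {};
  for (const line of netstatOutput.split('\n')) {
    const cols = line.trim().split(/\s+/);
    if (cols[0] !== 'TCP') continue; // skip headers and UDP rows
    const state = cols[3];
    counts[state] = (counts[state] || 0) + 1;
  }
  return counts;
}

console.log(countByState(sample)); // { TIME_WAIT: 2, LISTENING: 1 }
```

Piping the real command's output through something like this shows at a glance whether TIME_WAIT entries dominate.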

glasser commented Apr 14, 2015

Hmm, we made a similar fix for Unix (#2158), we should probably just boost this buffer too.

It's a little weird that these sockets are all being leaked, though (TIME_WAIT means the closing side is lingering briefly after shutdown so stray packets can drain, I think).

@Slava mentioned this issue May 19, 2015