
maxBuffer default too small #9829

Closed
mlynch opened this issue Nov 28, 2016 · 15 comments
Labels
child_process Issues and PRs related to the child_process subsystem.

Comments

@mlynch

mlynch commented Nov 28, 2016

  • Version: v6.2.1
  • Platform: Darwin me.local 15.5.0 Darwin Kernel Version 15.5.0: Tue Apr 19 18:36:36 PDT 2016; root:xnu-3248.50.21~8/RELEASE_X86_64 x86_64
  • Subsystem: child_process

Currently maxBuffer for child_process.exec is set to 200 * 1024 bytes (200 KiB, ~204.8 kB). I ran into an issue where my child process was being terminated, and tracking it down was quite tough. It turned out the process was producing enough output to exceed maxBuffer.

I think the buffer size is too small and this behavior (terminating a child) is drastic enough that it should only be done in the case where a child is producing a much larger amount of output.

I'm not sure what's sane here, perhaps 5MB+?

@mscdex mscdex added the child_process Issues and PRs related to the child_process subsystem. label Nov 28, 2016
@sindresorhus

I would go with 10 MB (which is what I'm using as the default in execa).

@sam-github
Contributor

How about we make maxBuffer mandatory?

@sindresorhus

@sam-github That sounds like an extremely breaking change and would be very inconvenient. Just set it to an arbitrary high value so most users won't hit it.

@sam-github
Contributor

Yeah, bit of a strawman... but any value is likely to be smaller than someone needs.

@sindresorhus

@sam-github Yes, but 200 KB is IMHO way too low. Anything higher will at least improve the situation.

@bnoordhuis
Member

The flip side is that the memory footprint for a program that spawns twenty simultaneous child processes goes up from max 4 MB to max 200 MB. That's a pretty steep increase.

I infer that you use maxBuffer as a convenient way to collect output but it's intended to be used as a circuit breaker for runaway processes.

Also, I'm curious why you are using child_process.exec/execFile for programs that print a lot to stdout. You'd be better served by child_process.spawn(), it knows how to stream the data.

@sindresorhus

The flip side is that the memory footprint for a program that spawns twenty simultaneous child processes goes up from max 4 MB to max 200 MB. That's a pretty steep increase.

That's only if they actually use all 10 MB each, right?

Honestly, even maxBuffer at 1 MB would be a huge improvement.

Also, I'm curious why you are using child_process.exec/execFile for programs that print a lot to stdout. You'd be better served by child_process.spawn(), it knows how to stream the data.

.spawn() is inconvenient to use when you just want the output as a whole and do something with it, which is my most common use-case for child_process. Not very often, but once in a while, I have output that takes more than 200 KB, like here.

@mlynch
Author

mlynch commented Nov 29, 2016

@bnoordhuis That is a good point, though it doesn't solve the core issue that developers hit this limit and their child processes terminate without much indication as to why.

Adding to what @sindresorhus said: if you are using exec to run a command and view its output, you are explicitly dealing with a single process, so in theory you are not going to spawn a bunch of children without monitoring their output.

@sam-github
Contributor

I'd be OK with upping it to 1 MB if someone PRs it; I've run into the limit too (but I just set maxBuffer when I do). 10 MB seems excessive; if you want that much output, explicitly configuring Node to expect it is reasonable.

Or perhaps the limit should be max-string, and run-away processes become the user's problem to protect against?

@peterhal

peterhal commented Mar 9, 2017

Having maxBuffer at all is the bug. It gives users an option they shouldn't have to specify.

Much better to remove the maxBuffer option and have the buffer grow dynamically. Programs with small streams use small amounts of memory; programs with large streams use more. Everything just works and the programmer doesn't need to know about the issue at all.

@bnoordhuis
Member

@peterhal I take it you didn't read this comment?

it's intended to be used as a circuit breaker for runaway processes

seppevs added a commit to seppevs/node that referenced this issue Mar 27, 2017
Increase the default maxBuffer for child_process.exec
from 200 * 1024 bytes to 1024 KB so child processes
don't get terminated too soon.

This is a fix for nodejs#9829
@jaswrks

jaswrks commented May 20, 2017

Having a maxBuffer option is enough that it allows a developer to create their own circuit breaker when they want one; i.e., a developer can set maxBuffer to a value they choose.

The problem: Node is trying to do what a developer should be doing. My humble opinion is that maxBuffer should default to Infinity and simply be made available for developers to use as a circuit breaker when they feel it's appropriate to do so.

Otherwise, as has been noted above, any default value for maxBuffer is likely to be too small in certain circumstances, resulting in unforeseen errors in various applications that didn't think to override whatever the default value was.

@Trott
Member

Trott commented Aug 13, 2017

Is this a change we're likely to consider? Should this remain open?

@bnoordhuis
Member

bnoordhuis commented Aug 14, 2017

There is an open but stalled pull request. Seeing there hasn't been much movement, I'll close this out. If someone wants to adopt #11196, feel free.

edit: I just closed the PR; didn't seem likely its author was going to pick it up again.

@vipulkshah

Can someone help me change the default buffer size? I have the same issue executing az status vm list skus.
