Large preprocessor outputs hit the maxBuffer limit #87
Conversation
Thanks for the request. Can you provide a regression test for child_process.spawn and resubmit?
I'm not too sure how to do that. Do I make a file like
Yes, exactly. The urlRelative test is written using Mocha and Chai. The others are written in NodeUnit. You can use whichever you find more comfortable. If reading the contents of a large file directly still reproduces the error that you fixed, then I would say it's preferable to skip the preprocessor command. Ideally we want the fewest and simplest conditions that reproduce the error.
Let me know if adding the tests here was fine, or if they should go in a separate pull request so the unfixed code can be tested first.
Here is fine. I trust that you watched the test fail and then made it pass. :) Although, now that I understand the nature of the bug, I'm wondering whether it's worth including this test. I'm not sure I want to add a 200M stub to the repo just to reproduce the error, but without it the test is at best useless. Let's merge the branch without the test. I think a better idea for the future is to hook a linter into the test suite to check the code for
Sounds good. Yeah, the warning should be something like:
Thanks again for the patch. It'll be in the next maintenance release.
When using one main file that imports all my CSS with a preprocessor, I was hitting the maxBuffer limit. I've changed `exec` to `spawn` so that the output gets built from the stream, without a limit. Also fixed a typo in the regex that checks for Sass partials.