
Is there a limit on the number of files to scan, or on the number of errors to output (screen or XML)? #63

Closed
danielppereira opened this issue Aug 11, 2014 · 20 comments

Comments

@danielppereira

Is there a limit on the number of files grunt-scss-lint will scan when running the task, or on the number of errors it will output?

If I execute the command outside Grunt, using scss-lint directly, all the .scss files are linted. So there's no file limit in the tool itself.

> scss-lint ../../pagamento/webroot/sass/

But when running my grunt-scss-lint task, configured like this:

    options: {
        config: '<%= app.config %>.scss-lint.yml',
        reporterOutput: '<%= app.config %>scsslint/report/scsslint_junit.xml',
        force: true
    },
    files: [{
        src: ['<%= app.src %>webroot/sass/checkout3/**/*.scss']
    }]

it breaks like this:

Running "scsslint:all" (scsslint) task
Running scss-lint on all
>> scss-lint failed with error code: undefined
>> and the following message:Error: stdout maxBuffer exceeded.
Warning: Task "scsslint:all" failed. Use --force to continue.

Aborted due to warnings.

If I change the src line to pick up fewer files, being more specific by changing '/**/' to '/core/', the task doesn't break.

files: [{
    src: ['<%= app.src %>webroot/sass/checkout3/core/*.scss']
}]

If the limitation is the quantity of errors output to the screen, then maybe this limit should be ignored when the 'reporterOutput' option is in use and the errors are written to an XML file.

using:

Ubuntu 12.04.4 LTS
scss-lint (0.26.2)
grunt-scss-lint": "^0.3.2"
@ahmednuaman
Owner

I guess it's probably this: https://github.com/ahmednuaman/grunt-scss-lint/blob/master/tasks/lib/scss-lint.js#L153

How about a pull request?
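For context on that line: it shells out via Node's child_process.exec, which accumulates the child's stdout into a buffer of at most maxBuffer bytes (a couple of hundred kilobytes by default in Node versions of that era) and aborts with "stdout maxBuffer exceeded" once the report outgrows it. Below is a minimal sketch of that mechanism only, not the plugin's actual code; the file list and buffer size are illustrative:

    // Sketch: exec buffers stdout and fails once maxBuffer is exceeded.
    var exec = require('child_process').exec;

    // Hypothetical file list; in the plugin this comes from the task's src globs.
    var files = ['webroot/sass/checkout3/core/main.scss'];

    exec('scss-lint ' + files.join(' '), {
        maxBuffer: 3000 * 1024  // illustrative value, larger than Node's default
    }, function (err, stdout) {
        if (err && /maxBuffer/.test(err.message)) {
            // The report was larger than the buffer; stdout is unusable here.
            console.error(err.message);
            return;
        }
        // Note: scss-lint also exits non-zero when it finds lint issues,
        // so stdout may still contain a valid report even when err is set.
        console.log(stdout);
    });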

@danielppereira
Author

Sorry, I tried but I don't have enough knowledge.
Some people fixed it in their plugin, as you can see here:

re1ro/grunt-bg-shell#4

@ahmednuaman
Owner

Sure, I'll have a go now.

@ahmednuaman
Owner

All fixed and pushed on v0.3.3

@isobar-ranesco

I'm getting this same issue. Originally it was due to the number of errors, but once I fixed those errors it seems to NOT validate my files. FYI: I have a total of 30 Sass files and the following is the Grunt call:

    'scsslint': {
        allFiles: [
            '<%= SOURCE_PATH %>/sass/components/*.scss',
            '<%= SOURCE_PATH %>/sass/modules/*.scss'
        ],
        options: {
            bundleExec: false,
            config: 'source/.scss-lint.yml',
            reporterOutput: null,
            force: false,
            exclude: ['<%= SOURCE_PATH %>/bower_components/**/**/*.scss']
        }
    },

I intentionally added an error to a few files in both directories, and none of them get picked up by the linter; however, if I use only one path it works.

gem: scss-lint v0.30.0
npm: grunt-scss-lint v0.3.3

@willemdewit

I had the same problem and I debugged a little bit.
When I add a console.log(err); on line 169 of tasks/lib/scss-lint.js I get the following output:

Running scss-lint on files
{ [Error: Command failed: The command line is too long.
] killed: false, code: 1, signal: null }
>> 145 files are lint free

So it seems that the joined array of files to lint is too big.
It is very confusing that it says all the files are lint free, while it is actually crashing.
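The "command line is too long" failure is different from the maxBuffer one above: here the operating system rejects a single invocation whose joined argument list exceeds the platform's command-length limit (Windows in particular has a fairly low one). One possible direction, sketched purely as an illustration and not as the plugin's actual implementation, is to split the file list into chunks and run scss-lint once per chunk:

    // Sketch: lint files in chunks so each command stays under the OS length limit.
    var exec = require('child_process').exec;

    function lintInChunks(files, chunkSize, done) {
        var chunks = [];
        for (var i = 0; i < files.length; i += chunkSize) {
            chunks.push(files.slice(i, i + chunkSize));
        }

        var reports = [];
        (function next(index) {
            if (index >= chunks.length) {
                return done(null, reports.join('\n'));
            }
            exec('scss-lint ' + chunks[index].join(' '), { maxBuffer: 3000 * 1024 }, function (err, stdout) {
                // scss-lint exits non-zero when it finds lint issues, so keep stdout either way.
                reports.push(stdout);
                next(index + 1);
            });
        })(0);
    }

    // Hypothetical usage:
    // lintInChunks(allScssFiles, 50, function (err, report) { console.log(report); });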

@Ghostavio

This shouldn't be closed as the bug is still there.

@ahmednuaman
Owner

Sad times, anyone want to have a go at fixing it?

@ahmednuaman reopened this Feb 23, 2015
@tbremer

tbremer commented Feb 23, 2015

I can take a look at this sometime this week. I'd like to do it sooner, but... work and all.

I have a massive code base I can check against, so I should be able to reproduce pretty quickly.

@tbremer

tbremer commented Feb 24, 2015

So, I see that we have a couple of options offhand:

  1. Close the issue and have users raise their maxBuffer (in my codebase it had to be 3000 * 1024) to succeed.
    • Not a great option in terms of usability, because we are forcing users to extend their memory usage.
    • However, it's maintainable and removes us from the heavy lifting!
  2. Create a check where we compare the length of the results against the maxBuffer, then break the results down and write them to the log / XML file.
    • Will add a fair amount of code.
    • Could produce more bugs / issues later down the line.
    • Could confuse users if they set a buffer and then see a faked "stream" coming through.
      • This can be worked around by printing / writing files carefully, though.
  3. Take a hint from @jshint: if our results are over a certain length, print them all and exit without any additional processing.
    • This would be a breaking change.
    • We would have to figure out something for --force.
  4. Look into moving away from Spawn and into Fork (a rough sketch of the streaming idea follows this list).
    • This would give us a new instance of the V8 engine, and the results would come back as a stream that we can do with as we please.
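A rough sketch of the streaming idea behind option 4 (illustrative only, not the plugin's code): reading the child's stdout as a stream sidesteps any fixed buffer, because the report is consumed chunk by chunk instead of being accumulated by exec. It is shown here with spawn, since its stdout is already exposed as a stream:

    // Sketch: stream the report instead of buffering it all at once.
    var spawn = require('child_process').spawn;

    // Hypothetical arguments; in the plugin these would come from the task config.
    var child = spawn('scss-lint', ['webroot/sass/checkout3/']);

    var report = '';
    child.stdout.on('data', function (chunk) {
        report += chunk;  // or write each chunk straight to the XML reporter
    });
    child.on('close', function (code) {
        // scss-lint uses a non-zero exit code when lint issues are found.
        console.log('scss-lint exited with code ' + code);
        console.log(report);
    });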

I think any of these options are viable, we just need to pick one.

Thoughts?

@ahmednuaman
Owner

Interesting points. I'm going to have a look at some Node libraries to see if they can help with this problem. I'm looking to refactor this library and move to a more modular structure (e.g. how the XML is created).

@tbremer

tbremer commented Feb 27, 2015

The more I think about it, the more I like the 3rd option, personally.

But I am also a huge fan of a rewrite / update.

@davidjbradshaw

Just also ran into this issue and I like the third option as well.

@tbremer

tbremer commented Mar 23, 2015

@davidjbradshaw You can always change the maxBuffer option to something really high if you want to bypass the issue.
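For anyone just looking for the workaround: assuming the task exposes the maxBuffer option discussed in this thread, the override can go straight into the Grunt config. The value below is the one mentioned earlier in this thread; tune it to your codebase:

    scsslint: {
        allFiles: ['src/sass/**/*.scss'],
        options: {
            maxBuffer: 3000 * 1024
        }
    }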

@davidjbradshaw

Thanks, just set it to 300000000 to get around the issue; the project I just joined has 21,000 errors!!!

@ahmednuaman
Owner

😮

@QueueHammer

As a programmer, this thread makes me sad 😭

@ahmednuaman
Owner

So why not make a PR? ;)

@StephanBijzitter

Yeah, this is pretty damn stupid honestly.

@ghost

ghost commented Oct 10, 2016

The problem persists for me. Any solution in the works? I would help if I could, but I don't have the skills.
