
Eliminate total error count #238

Closed
ChrisTorng opened this issue Feb 9, 2014 · 11 comments

Comments

@ChrisTorng

Larger JS files may generate too many errors. I'd like a limit on the total error count, and ideally a config option for it.

ChrisTorng referenced this issue in ChrisTorng/WebEssentials2013 on Feb 9, 2014: "Combine JsHint and JSCS rule and cleanup excludeFiles. Hack to stop <xml and more than 500 errors. Revert Regex."
@jzaefferer
Contributor

Could you provide some more details? What exactly is the problem you'd like to see addressed? What would a limit do, and what should the limit configuration look like?

@ChrisTorng
Author

JSHint has maxerr (http://jshint.com/docs/options/#maxerr); I'd just like this to work the same way. I think it's a common feature for any linter. What part is unclear?
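For reference, a minimal sketch of how maxerr is set in a .jshintrc: JSHint stops reporting once a file produces that many errors (50 is its documented default):

```json
{
  "maxerr": 50
}
```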

@jzaefferer
Contributor

maxerr à la JSHint seems reasonable. Thanks for the link.

@mikesherov mikesherov added this to the 1.6 milestone Jun 24, 2014
@mikesherov mikesherov removed the cli label Jun 24, 2014
@mikesherov mikesherov modified the milestone: 1.6 Jun 24, 2014
@martinambrus

+1 to this. I ran out of memory on a 2 GB continuous-integration Ubuntu machine while accidentally scanning external libraries (jQuery and the like), which kept killing the machine completely.

@mikesherov mikesherov modified the milestones: 1.6, 1.7 Aug 29, 2014
@mrjoelkemp
Member

@mikesherov Would this be supported via a CLI option or via the config? I'd lean toward the former, to avoid complicating config parsing.

@jzaefferer
Contributor

Whatever it is, there should be a default for this to be useful. JSHint's default is 50.

@mikesherov
Contributor

Default 50, supplied by either config or command line.
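A sketch of what the resulting configuration might look like, using the maxErrors name and default discussed here; the exact option name and command-line flag that eventually shipped are assumptions:

```json
{
  "maxErrors": 50
}
```

The command-line equivalent would presumably be a flag along the lines of --max-errors 50 (name assumed), so a CI run could set the limit without touching the project config.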

@mrjoelkemp mrjoelkemp self-assigned this Sep 28, 2014
@mrjoelkemp
Member

Some thoughts after digging into this:

  1. A lib/errors object could have a clause in add that prevents a new error from being added if the maxErrors limit is hit. This would help the RAM issue by reducing the number of error objects kept in memory within _errorlist, though I'm not entirely convinced that's the cause of the memory bloat.
  2. Checker should avoid checking the file against the next rule if the error limit has been hit. This will avoid further, unnecessary processing of the current file.
  3. checkString should (abruptly) return an empty list if the error limit has been hit. This should avoid esprima-parsing files unnecessarily once the limit is hit.

The only problem is that once we're in the execution context of rule.check(file, errors), it's not possible to halt processing without modifying rules. With (1), we can avoid additional errors from being added past the maxErrors limit, but we can't stop rule checking abruptly. Not being able to halt processing is only an issue for a processor-heavy rule checking a large file before hitting the maxErrors limit.

Any objections/thoughts on these points?
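A minimal sketch of what (1) and (2) might look like, assuming an Errors object backed by an _errorList array and a checker that loops over the configured rules; the names are illustrative, not the actual jscs internals:

```javascript
// Sketch only: illustrative names, not the actual jscs internals.
var DEFAULT_MAX_ERRORS = 50;

function Errors(maxErrors) {
    this._errorList = [];
    this._maxErrors = maxErrors || DEFAULT_MAX_ERRORS;
}

// (1) Stop accumulating once the limit is reached, so _errorList stops growing.
Errors.prototype.add = function (message, line, column) {
    if (this.isFull()) {
        return;
    }
    this._errorList.push({ message: message, line: line, column: column });
};

Errors.prototype.isFull = function () {
    return this._errorList.length >= this._maxErrors;
};

// (2) The checker skips the remaining rules for a file once the limit is hit,
// avoiding further unnecessary processing of that file.
function checkFile(file, rules, errors) {
    for (var i = 0; i < rules.length; i++) {
        if (errors.isFull()) {
            break;
        }
        rules[i].check(file, errors);
    }
    return errors;
}
```

As noted above, this cannot interrupt a rule that is already running; a slow rule on a large file will still finish its pass before the check at the top of the loop takes effect.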

@mikesherov
Contributor

Implementing 1 & 2 seems good for now. Let someone complain about 3 before we fix it. Mike Sherov


@mikesherov
Contributor

Actually, much simpler: keep a static running tally of errors, and when the max is reached, throw inside errors.add and catch the error outside. This takes care of 1, 2, and 3.

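A minimal sketch of the throw-and-catch idea, again with illustrative names rather than the actual jscs internals; for brevity the tally here is kept per call rather than as a run-wide static counter:

```javascript
// Sketch only: illustrative names, not the actual jscs internals.
function MaxErrorsExceededError() {
    this.name = 'MaxErrorsExceededError';
    this.message = 'Maximum number of errors exceeded';
}
MaxErrorsExceededError.prototype = Object.create(Error.prototype);

function Errors(maxErrors) {
    this._errorList = [];
    this._maxErrors = maxErrors;
}

// Throw as soon as the running tally reaches the limit...
Errors.prototype.add = function (message, line, column) {
    this._errorList.push({ message: message, line: line, column: column });
    if (this._errorList.length >= this._maxErrors) {
        throw new MaxErrorsExceededError();
    }
};

// ...and catch it once, outside the rule loop, so checking stops immediately,
// even in the middle of a rule (covering points 1, 2, and 3).
function checkString(str, rules, maxErrors) {
    var errors = new Errors(maxErrors);
    try {
        rules.forEach(function (rule) {
            rule.check(str, errors);
        });
    } catch (e) {
        if (!(e instanceof MaxErrorsExceededError)) {
            throw e; // unrelated failures still surface
        }
    }
    return errors;
}
```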

@mrjoelkemp
Member

Great idea!

mrjoelkemp added commits to mrjoelkemp/node-jscs that referenced this issue between Oct 2 and Oct 8, 2014.