This repository has been archived by the owner on Aug 7, 2023. It is now read-only.

Linter blocks on large files + high CPU/memory usage #755

Closed
rahatarmanahmed opened this issue Nov 17, 2016 · 11 comments
@rahatarmanahmed

rahatarmanahmed commented Nov 17, 2016

Issue Type

Bug

Issue Description

I've encountered a large file that stalls linter-eslint for around 10-15 minutes.

I've created a minimal reproduction of the issue (the code itself is obfuscated but the issue is still there).

Steps to reproduce with the above files

  1. Download the whole gist
  2. cd into the gist folder
  3. atom .
  4. Open test.js and write some code that has linting errors. I've enabled the comma-spacing rule, so something like var abc = [1,2,3] should show errors. Everything is working at this point.
  5. Open constants.js
  6. Hit Cmd+S to save, to ensure the linter is triggered
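For reference, the comma-spacing rule mentioned in step 4 can be enabled with a minimal .eslintrc like this (an assumed config; the actual config in the gist may differ):

```json
{
  "rules": {
    "comma-spacing": ["error", { "before": false, "after": true }]
  }
}
```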

At this point the issue occurs. Go back to the test.js tab and edit the file to change the linting errors; you'll notice the linter doesn't update them. You'll also notice the Atom Helper process for ESLint starts consuming significant amounts of memory and CPU time, slowing down the main editor. If you take heap snapshots you'll find a large string; subsequent snapshots show that string growing in size.
[screenshot: slow-lint 2016-11-17 16-51-46]

After about 10-15 minutes everything returns to normal. This slowdown isn't due to ESLint itself, since the CLI can lint the two files in less than a second.

Bug Checklist

  • Restart Atom
  • Verify the eslint CLI gives the proper result, while linter-eslint does not
  • Paste the output of the Linter Eslint: Debug command from the Command Palette below

The message doesn't show up until after the 10-15 minutes it takes for the linter to run. I assume this means it's blocked the whole time.

atom version: 1.12.4
linter-eslint version: 8.0.0
eslint version: 3.7.1
hours since last atom restart: 0.1
platform: darwin
Using bundled fallback eslint from /Users/rahmed3/.atom/packages/linter-eslint/node_modules/eslint
linter-eslint configuration: {
  "lintHtmlFiles": false,
  "useGlobalEslint": false,
  "showRuleIdInMessage": true,
  "disableWhenNoEslintConfig": true,
  "eslintrcPath": "",
  "globalNodePath": "",
  "advancedLocalNodeModules": "",
  "eslintRulesDir": "",
  "disableEslintIgnore": false,
  "disableFSCache": false,
  "fixOnSave": false,
  "scopes": [
    "source.js",
    "source.jsx",
    "source.js.jsx",
    "source.babel",
    "source.js-semantic"
  ],
  "rulesToSilenceWhileTyping": []
}
@rahatarmanahmed
Author

rahatarmanahmed commented Dec 12, 2016

Did some digging into this. From what I can tell, eslint is generating ~170MB of lint error data from this file. The actual linting does indeed take less than 1 second on the worker child process. The slowness seems to come from the transfer of that 170MB from the worker to the main process. The giant string in the heap snapshot seems to confirm this suspicion, as it slowly approaches 170MB. Something either in the process-communication module or electron/node's child-process IPC is causing the transfer to take a very long time.

I measured how long the whole thing took once:

JSON.stringify(job.response).length: 169663701 characters (roughly bytes, close enough for ASCII)
transfer time: 1173000 ms = 19.55 min
transfer speed: ~144.64 kB/s

Which is, uh, not right; IPC shouldn't be that slow. I'll try measuring how long Node usually takes to transfer data over IPC.

@rahatarmanahmed
Author

rahatarmanahmed commented Dec 13, 2016

Possible cause: nodejs/node#3145
I've tested it out, and it seems that the time grows O(n^2) with increasing process.send() message size (though I can't test with a 169663701-byte message; it runs out of memory before finishing).
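The quadratic growth can be illustrated with a toy model of the suspected pattern (a sketch for intuition, not Node's actual IPC source): if each incoming chunk is appended to one ever-growing string, every append copies everything buffered so far.

```javascript
// Toy model (not Node's actual IPC code) of why appending incoming
// chunks to one growing string is O(n^2) in total message size:
// each append copies the entire buffer accumulated so far.
function copiedChars(chunkSizes) {
  let buffered = 0; // characters buffered so far
  let copied = 0;   // total characters copied across all appends
  for (const size of chunkSizes) {
    buffered += size;
    copied += buffered; // an append copies the whole new buffer
  }
  return copied;
}

// Doubling the number of 64-byte chunks roughly quadruples the work:
console.log(copiedChars(new Array(100).fill(64))); // 323200
console.log(copiedChars(new Array(200).fill(64))); // 1286400
```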

@IanVS
Member

IanVS commented Jan 5, 2017

Great investigating. I've also noticed slow response times even for moderately large files with lots of linting errors.

I wonder if one possible "solution" here would be to truncate the linting errors after a reasonably high number. I can't see any real benefit to showing more than a few hundred linting errors in a single file. That probably just indicates that you need to add that file to your .eslintignore. :)

@Arcanemagus, what do you think about only showing the first few hundred (maybe 500?) linting errors? We could then throw up a notification that further linting messages have been suppressed. At least this way we won't lock up editors for a quarter of an hour. 🤦‍♂️
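The truncation idea could be sketched like this (hypothetical function and names, made up for illustration; not linter-eslint's actual API):

```javascript
// Hypothetical sketch: cap the number of lint messages sent from the
// worker, and count the rest so the UI can show a notification that
// further messages were suppressed.
const MAX_MESSAGES = 500;

function truncateMessages(messages, limit = MAX_MESSAGES) {
  if (messages.length <= limit) {
    return { messages, suppressed: 0 };
  }
  return {
    messages: messages.slice(0, limit),
    suppressed: messages.length - limit,
  };
}

const lots = new Array(1200).fill({ ruleId: 'comma-spacing' });
const { messages, suppressed } = truncateMessages(lots);
console.log(messages.length, suppressed); // 500 700
```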

@rahatarmanahmed
Author

Another option may be to apply some compression scheme to sufficiently large results before transferring the data between processes. This could be helpful if the time saved transferring less data greatly exceeds the time spent compressing/decompressing. A lot of the data is fairly repetitive, so it might work. A cursory Google search turns up libraries like JSONH that do this, though there may be something better.

On the flip side, it's added complexity for a bug that should arguably be fixed in node core if possible.

@IanVS
Member

IanVS commented Jan 5, 2017 via email

@rahatarmanahmed
Author

Maybe. I'll try to investigate the issue in node core first, but failing that I can see if compression works.

@rahatarmanahmed
Author

UPDATE: it looks like someone has already submitted a PR to fix that issue in Node: nodejs/node#10557. Once that gets into Electron and Atom updates, this issue should become much less annoying. However, large enough lint results will still take several seconds to transfer, so we probably still need to truncate the results.

@IanVS
Member

IanVS commented Jan 6, 2017

That's awesome, thanks for all the investigation work.

Judging by how long it took to get the last node bump in Atom, I have a feeling this may take quite some time to trickle down, and agree that it might make sense to take some mitigating actions within linter-eslint in the meantime. I'd like to hear what @Arcanemagus thinks, though.

@IanVS
Member

IanVS commented Jan 30, 2017

It's looking like this will land in node 7.5.0.

@Arcanemagus
Member

Arcanemagus commented May 19, 2017

The current Electron version in Atom master (v1.6.9) only includes Node v7.4.0, so unfortunately we're missing out on that fix... but it does include Web Workers, with which we could use Transferable Objects to send the data between the worker and the main process with virtually no overhead.

It's looking like this is due to land around Atom v1.19.0, so beyond the compression idea thrown around above, we are likely stuck waiting on that for any progress to be made here.

@UziTech UziTech added the stale label Mar 14, 2021
@UziTech
Member

UziTech commented Mar 14, 2021

Is this still an issue?

@UziTech UziTech closed this as completed Oct 24, 2021