FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory #18411
@pocesar can you share your sources with us? We are open to doing this by email, or signing any required NDAs.
Can you also try
Just crashed with another stack trace.
The stack trace is inconsequential here. The process runs out of memory; it does not really matter where it crashes. What matters is why the memory is filling up.
It will crash again in a couple of minutes; I'll see if anything changes.
So it does not crash without
So far I've been able to build without watch, but when using
the memory is ever-increasing, though.
Aah, that indicates there is a memory leak for
Yes, it usually reaches a point where my machine slows to a crawl, and even VS Code crashes (because I have 5 editors open, all running tsc with watch, using the nightly for all of them, even for VS Code).
Try this:
The problem is that there is a leak: no matter how large the limit is set, the memory reaches the configured maximum and then crashes. Using Node 8.5.0 now, and 2.6.0-dev.20170913, with a fresh "tsc -p tsconfig.json -w".
Same error here.
Using 2.6.0-dev.20170916.
Same error here, with
I've encountered this reliably when trying to use the latest commit on facebook/immutable-js as a dependency.
@icopp I think this is a different issue. Can you try with typescript@next instead?
Still happening in 20170920.
Should I stop reporting the same behavior on nightlies until #17269 gets merged?
Yes, please.
Just a heads up: it's really hard to have multiple watches on a project that is split into many shared codebases. In my scenario, I have two projects that are symlinked into the main projects (one for typings and another for shared UI stuff). Every time there's a save, the memory increases until it crashes. Although I have 5 instances of TS running in watch mode, together they are draining 53% of my total RAM (I'm near 93% used RAM at the moment). This is a huge issue that should be addressed ASAP; it's really hindering the workflow.
@pocesar I'm limping along by retaining the default small heap size, letting tsc crash, and restarting automatically: while [ 1 ] ; do yarn clean ; yarn tsc -w ; sleep 1 ; done (the
#17269 should be in
Thanks, @mhegazy. I'm looking at three pages of compiler warnings/errors from 2.6 (code that compiled successfully with strict mode in 2.5.3). It looks like it's mainly in third-party typings, so it'll take some time to get that sorted.
Nope, it even made it worse: each watcher now starts at nearly 1 GB of RAM, and each save ramps it up. Using 20171005.
@pocesar Wanted to make sure you are using the latest build. While trying to repro this on a larger code base, I noticed that in --diagnostics mode there is too much logging on the command line. Did you notice that? (The fix for that would be in PR #18970.) Also, I did not see an explosion in memory. The memory usage is higher than in the previous version, but memory isn't leaking and is being collected.
@sheetalkamat Yes, I noticed (and it's missing some newlines; the output is kind of raw and unformatted). I'm almost sure that the problem is happening because I use
@pocesar I will give it a shot with npm link to see if I am able to repro the out-of-memory issue and investigate this further.
@pocesar I tried the npm link option and it seems to work OK for me. I am wondering if it would be possible for you to share your code with exact steps (what to npm install, what to npm link), either here or privately by email (shkamat@microsoft.com), so I can investigate this further.
I am experiencing this when trying to upgrade; the not-that-useful stack trace:
@shepmaster I tried to reproduce the issue with and without --watch, and I am able to build without running into the trace you mentioned. Am I missing something? I did verify that the tsc version I am running this with is
@sheetalkamat Interesting! The biggest difference I can see is that you are using Windows while I am using macOS (10.12.6). I am also using Node v8.7.0. My computer has 16 GB of RAM, and watching the memory usage of the
@shepmaster Are you seeing the issue only when doing tsc --watch, or even without watch? I used a computer with 16 GB RAM too. My Node version is v8.6.0.
@sheetalkamat I am seeing it even without. I don't know how having upgraded ts-loader would affect the performance of
Are the warnings like
normal / to be expected? It seems strange that there's some kind of configuration that would be overwriting input files... In fact, when I add some options to extend the memory available (
Perhaps I have some poor configuration that is reading my output directory as input files... In fact, I have
Just a note: this error still happens for me in 2.7.0-dev.20171027.
@electricessence Can you share the project?
@mhegazy https://github.com/electricessence/TypeScript.NET The project compiles in WebStorm for the existing tsconfigs with 2.7, but I'm dependent on gulp to render my distributions and run tests. The area where it gets problematic is /source/System.Linq, where the Linq.ts lib tries to reconcile the extensive interface signature. If I rip out the guts of the interface, it will compile (although still quite slowly relative to 2.3.2).
@electricessence Seems like a separate issue.
Filed #19662 to track that one.
OK, I will try
Closing for now. Please reopen if you are still running into issues.
I'm following it on #19662, |
TypeScript Version: nightly (2.6.0-dev.20170912)
Code
Expected behavior:
Not crash
Actual behavior:
TS in watch mode is randomly crashing with the above stack trace. I can't figure out what is making it crash. I'm using VS Code, but the watch mode is
tsc -w -p tsconfig.json
Per @sandersn's recommendation (in #17112 (comment)), opening a new issue about the same error name but a different stack.
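For reference, the tsc -w -p tsconfig.json command above expects a config file of roughly this shape. This is a hypothetical minimal example, not the reporter's actual configuration:

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "outDir": "dist"
  },
  "include": ["src/**/*"]
}
```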