
Source map generation causes build to go out of memory. #9568

Closed
sebakerckhof opened this issue Jan 23, 2018 · 101 comments
Labels
confirmed We want to fix or implement it Type:Bug

Comments

@sebakerckhof
Contributor

sebakerckhof commented Jan 23, 2018

I'm trying to verify if #9552 is fixed in 1.6.1, but it takes forever and then goes out of memory.

Meteor 1.6.0 starts/builds my test-packages command in about 7 minutes.
Meteor 1.6.1 hangs for about 25 minutes on the Build step, and finally breaks down:

| (#4) Profiling: Build App                  /
   Linking                                   -             
<--- Last few GCs --->

[11441:0x2cf8550]  1948045 ms: Mark-sweep 1402.6 (2063.7) -> 1402.6 (2063.7) MB, 908.3 / 0.2 ms  allocation failure GC in old space requested
[11441:0x2cf8550]  1949102 ms: Mark-sweep 1402.6 (2063.7) -> 1402.6 (2011.7) MB, 1056.4 / 0.1 ms  last resort GC in old space requested
[11441:0x2cf8550]  1950043 ms: Mark-sweep 1402.6 (2011.7) -> 1402.6 (2002.2) MB, 940.2 / 0.2 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x286ac22a5ee1 <JSObject>
    2: _parseMappings(aka SourceMapConsumer_parseMappings) [/home/seba/.meteor/packages/meteor-tool/.1.6.1.blw62s.u2x7r++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/lib/node_modules/source-map/lib/source-map-consumer.js:462] [bytecode=0x1e5cf12fdb89 offset=204](this=0x12a7dc78ea9 <BasicSourceMapConsumer map = 0x3c0cf2167301>,aStr=0x1b346517fc79 <Very long string[3090]>,...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [/home/seba/.meteor/packages/meteor-tool/.1.6.1.blw62s.u2x7r++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/bin/node]
 2: 0x121a2cc [/home/seba/.meteor/packages/meteor-tool/.1.6.1.blw62s.u2x7r++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/bin/node]
 3: v8::Utils::ReportOOMFailure(char const*, bool) [/home/seba/.meteor/packages/meteor-tool/.1.6.1.blw62s.u2x7r++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/bin/node]
 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/home/seba/.meteor/packages/meteor-tool/.1.6.1.blw62s.u2x7r++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/bin/node]
 5: v8::internal::Factory::NewFixedArray(int, v8::internal::PretenureFlag) [/home/seba/.meteor/packages/meteor-tool/.1.6.1.blw62s.u2x7r++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/bin/node]
 6: v8::internal::HashTable<v8::internal::StringTable, v8::internal::StringTableShape>::NewInternal(v8::internal::Isolate*, int, v8::internal::PretenureFlag) [/home/seba/.meteor/packages/meteor-tool/.1.6.1.blw62s.u2x7r++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/bin/node]
 7: v8::internal::HashTable<v8::internal::StringTable, v8::internal::StringTableShape>::New(v8::internal::Isolate*, int, v8::internal::PretenureFlag, v8::internal::MinimumCapacity) [/home/seba/.meteor/packages/meteor-tool/.1.6.1.blw62s.u2x7r++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/bin/node]
 8: v8::internal::HashTable<v8::internal::StringTable, v8::internal::StringTableShape>::EnsureCapacity(v8::internal::Handle<v8::internal::StringTable>, int, v8::internal::PretenureFlag) [/home/seba/.meteor/packages/meteor-tool/.1.6.1.blw62s.u2x7r++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/bin/node]
 9: v8::internal::StringTable::LookupString(v8::internal::Isolate*, v8::internal::Handle<v8::internal::String>) [/home/seba/.meteor/packages/meteor-tool/.1.6.1.blw62s.u2x7r++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/bin/node]
10: 0x10d434e [/home/seba/.meteor/packages/meteor-tool/.1.6.1.blw62s.u2x7r++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/bin/node]
11: v8::internal::Runtime_KeyedGetProperty(int, v8::internal::Object**, v8::internal::Isolate*) [/home/seba/.meteor/packages/meteor-tool/.1.6.1.blw62s.u2x7r++os.linux.x86_64+web.browser+web.cordova/mt-os.linux.x86_64/dev_bundle/bin/node]
12: 0x15729810463d
Aborted (core dumped)

Running the application normally (meteor run) seems to work.

I did have to modify and include some packages to get 1.6.1 to work due to the babel changes (which turned out to be a very painful upgrade for a patch release IMO):

Other than that, the code is the same.

@sebakerckhof sebakerckhof changed the title 1.6.1 out of memory 1.6.1 test build goes out of memory Jan 23, 2018
@abernix
Contributor

abernix commented Jan 23, 2018

  • Does running meteor run --production produce a similar failure?
  • Can you reproduce the same occurrence when running Meteor from a checkout? (i.e. ./meteor run --production)
  • If you provide the Meteor tool with more memory by setting the environment variable TOOL_NODE_FLAGS=--max-old-space-size=4096 (4GB in this case, adjust as your system permits of course), does it eventually finish?
    • i.e. TOOL_NODE_FLAGS=--max-old-space-size=4096 meteor run --production (or ./meteor from a checkout)
  • If it does, can you provide the output of a successful run with the environment variable METEOR_PROFILE=1 set?
    • i.e. METEOR_PROFILE=1 ./meteor run --production

If this is, in fact, a memory-management problem caused by source-map, I'm certainly intrigued by the changes they recently made in 0.7.0 (we use 0.5.3 in meteor-tool), though bumping to 0.7.0 requires some other (breaking) changes to be resolved (feel free to investigate!) 😄

@abernix abernix added this to the Release 1.6.2 milestone Jan 23, 2018
@benjamn
Contributor

benjamn commented Jan 23, 2018

If someone wants to investigate upgrading to source-map@0.7.0, I would happily review that pull request! Their usage of Rust/WASM is some of the coolest tech I've seen in a long time!

@hwillson
Contributor

If no one else has already started on this, I'll put my ✋ up.

@hwillson hwillson self-assigned this Jan 23, 2018
@benjamn
Contributor

benjamn commented Jan 23, 2018

The great news is that this code will only need to run in Node 8, which natively supports WASM:

% meteor node -p WebAssembly.compile
[Function: compile]

@KoenLav
Contributor

KoenLav commented Jan 23, 2018

Same issue here: 1.6.0 builds in under 10 minutes, while 1.6.1 takes over an hour or dies with an out-of-memory error.

@benjamn
Contributor

benjamn commented Jan 23, 2018

If we spent some time instrumenting meteor build with profiling hooks (see meteor/meteor-feature-requests#239), then it would be easier to diagnose the source of this problem.

@benjamn
Contributor

benjamn commented Jan 23, 2018

Also, just to be clear, build times of more than a few minutes (even ~10) are unacceptable, and should always be regarded as a bug. These apps aren't written in C++ or Scala or some other language known for long compile times. Something's definitely wrong here.

@jamesmillerburgess
Contributor

I also just got a similar error on a rebuild of TINYTEST_FILTER="accounts" ./meteor test-packages.

=> Linted your app. No linting errors.
=> Linted your app. No linting errors.
=> Linted your app. No linting errors.
=> Linted your app. No linting errors.
=> Linted your app. No linting errors.
=> Linted your app. No linting errors.
=> Linted your app. No linting errors.
=> Linted your app. No linting errors.
=> Linted your app. No linting errors.
=> Linted your app. No linting errors.
=> Linted your app. No linting errors.
   Building for web.browser                  -
<--- Last few GCs --->

[89583:0x103000000] 83182677 ms: Mark-sweep 1408.0 (1461.6) -> 1408.0 (1461.6) MB, 1681.8 / 0.1 ms  allocation failure GC in old space requested
[89583:0x103000000] 83184036 ms: Mark-sweep 1408.0 (1461.6) -> 1408.0 (1458.6) MB, 1357.9 / 0.1 ms  last resort GC in old space requested
[89583:0x103000000] 83185383 ms: Mark-sweep 1408.0 (1458.6) -> 1408.0 (1458.6) MB, 1346.6 / 0.1 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x3d42453a5ee1 <JSObject>
    1: stringSlice(aka stringSlice) [buffer.js:~555] [pc=0x24287a4257b6](this=0x3d42b7b82311 <undefined>,buf=0x3d426623f969 <Uint8Array map = 0x3d4255241e99>,encoding=0x3d42453b6d31 <String[4]: utf8>,start=0,end=712006)
    2: toString [buffer.js:~609] [pc=0x24287989163e](this=0x3d426623f969 <Uint8Array map = 0x3d4255241e99>,encoding=0x3d42453b6d31 <String[4]: utf8>,start=0x3d42b7b82311 <undefin...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
Abort trap: 6

@benjamn
Contributor

benjamn commented Jan 23, 2018

@jamesmillerburgess Do you see any more clues in your stack trace about where the string/buffer slicing might have been happening?

@jamesmillerburgess
Contributor

@benjamn Not really, but maybe the error could have something to do with me deleting a project file during a rebuild? I don't remember if this was exactly the build where I deleted the file, but that was one of the last things I did before seeing the error.

@hwillson
Contributor

With regards to updating source-map to 0.7.0, that might be a bit of a challenge due to this breaking change:

Breaking change: new SourceMapConsumer now returns a Promise object that resolves to the newly constructed SourceMapConsumer instance, rather than returning the new instance immediately.

The Meteor Tool makes a few new SourceMapConsumer calls like:

meteor/tools/fs/files.js

Lines 1091 to 1093 in 1fec23f

var consumer = new sourcemap.SourceMapConsumer(options.sourceMap);
chunks.push(sourcemap.SourceNode.fromStringWithSourceMap(
  code, consumer));

Adapting this code to work with Promises is unlikely to be straightforward, however, since Fiber yields are blocked by loadIsopackage. That means we have no way to get the resolved SourceMapConsumer object before it's needed by other parts of the synchronous Tool code.

Unless I'm missing something? Suggestions are welcome! 🤔
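As a stub illustration of that breaking change (FakeConsumer and withConsumer are invented names for this sketch, not the Tool's or the library's actual code), 0.7-style construction hands back a Promise that callers must await:

```javascript
// Stub illustration of the source-map 0.5 -> 0.7 breaking change.
// FakeConsumer stands in for sourcemap.SourceMapConsumer.
class FakeConsumer {
  constructor(rawMap) {
    this.rawMap = rawMap;
    // 0.7-style: `new SourceMapConsumer(...)` resolves asynchronously,
    // so callers receive a Promise instead of a ready instance.
    return Promise.resolve(this);
  }
  destroy() {
    // 0.7 consumers also require explicit cleanup.
    this.rawMap = null;
  }
}

// 0.5.x (synchronous) usage no longer works:
//   const consumer = new SourceMapConsumer(rawMap); // now yields a Promise!
// 0.7.x usage must await construction:
async function withConsumer(rawMap, fn) {
  const consumer = await new FakeConsumer(rawMap);
  try {
    return fn(consumer);
  } finally {
    consumer.destroy();
  }
}
```

The synchronous Tool code can't simply `await` like this inside a noYieldsAllowed block, which is exactly the friction described above.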

@sebakerckhof
Contributor Author

sebakerckhof commented Jan 24, 2018

Does running meteor run --production produce a similar failure?

Yes. I didn't check this first since our Jenkins was doing the building, but those jobs had the max-old-space-size flag set.

Can you reproduce the same occurrence when running Meteor from a checkout? (i.e. ./meteor run --production)

Yes

If you provide the Meteor tool with more memory by setting the environment variable TOOL_NODE_FLAGS=--max-old-space-size=4096 (4GB in this case, adjust as your system permits of course), does it eventually finish?

Yes, and then it finishes in about the same time as 1.6.0 (still about 10 minutes). So the slowdown is probably caused by memory being exhausted. Might it be that the new uglify-es version in 1.6.1 is producing more errors, so we're seeing the old problem we had with Babili?

Anyway, something in 1.6.1 results in higher memory usage than 1.6.0.

Unfortunately we're working toward a new release of our application and I have little time to spend on this kind of problem right now, so I'm just reporting my findings.

One thing (unrelated to the above) I just noticed is that almost all of the time in the ProjectContext prepareProjectForBuild step is spent on:
runJavaScript packages/urigo_static-html-compiler.js 38,360 ms (1)
out of a total of 42,481 ms (ProjectContext prepareProjectForBuild).
From what I gathered from the comments on runJavaScript, this just evaluates the code. So I thought the time might come from the npm dependencies in that package, but even if I remove those, it stays that high.

@KoenLav
Contributor

KoenLav commented Jan 24, 2018

Interesting!

I'll try upping the memory limit on our builds as well.

Actual build & deploy times for our app used to be around 6 minutes, the whole CircleCI run would take about 10.

It might also be worth noting that we experienced a similar issue when using standard-minifier-js 2.2.3 instead of 2.2.1 in Meteor 1.6.0.

#9430

@abernix
Contributor

abernix commented Jan 24, 2018

@hwillson Since Meteor's Promise implementation should be in play, is it possible to just call .await() on the result of the new SourceMapConsumer instantiation while keeping everything else relatively unchanged?

@hwillson
Contributor

@abernix I had tried, but unfortunately the fiberHelpers.noYieldsAllowed call in loadIsopackage prevents the await from working:

While loading isopacket `combined`:
/tools/utils/fiber-helpers.js:28:9: Can't call yield in a noYieldsAllowed block!
at Function.disallowedYield (/tools/utils/fiber-helpers.js:28:9)
at stackSafeYield (/Users/hwillson/Documents/git/meteor/meteor/dev_bundle/lib/node_modules/meteor-promise/promise_server.js:101:25)
at awaitPromise (/Users/hwillson/Documents/git/meteor/meteor/dev_bundle/lib/node_modules/meteor-promise/promise_server.js:96:12)
at Function.Promise.await (/Users/hwillson/Documents/git/meteor/meteor/dev_bundle/lib/node_modules/meteor-promise/promise_server.js:56:12)
at Profile.time (/tools/fs/files.js:1092:22)
...

@hwillson
Contributor

hwillson commented Jan 24, 2018

Just to add, @abernix: removing the noYieldsAllowed call from loadIsopackage fixes the issue (letting Promise.await work), but the no-yields block was added in e7167e5 as part of a massive tool file cleanup. Maybe it's being a bit overzealous, but I'm reluctant to change it without knowing the full impact of removing it. Maybe we can be more selective about what isn't allowed to yield inside the loadIsopackage process. That's the path I've been heading down ...

P.S. Working through the isobuild code has me yielding for coffee ... a lot. 🙂

@sebakerckhof sebakerckhof changed the title 1.6.1 test build goes out of memory 1.6.1 build goes out of memory Jan 25, 2018
@abernix
Contributor

abernix commented Jan 25, 2018

Hmm. Commit e7167e5 leaves a lot to be understood (grr, squash!) about the reasoning for noYieldsAllowed being where it is (it wasn't there before!). The (squashed) commit message hints: "Make sure JsImage readFromDisk doesn't yield." With that in mind, it might be reasonable to move the noYieldsAllowed to wrap the bundler.readJsImage call in loadIsopacketFromDisk rather than also enveloping the load call (which I think is where you're running into the yield being prevented):

var image = bundler.readJsImage(
  files.pathJoin(isopacketPath(isopacketName), 'program.json'));

I think that would allow the runJavaScript at

files.runJavaScript(item.source.toString('utf8'), {
  filename: item.targetPath,
  symbols: env,
  sourceMap: item.sourceMap,
  sourceMapRoot: item.sourceMapRoot
});

to permit the await() you need to add in this case. It may need to be more surgical, but I'd be curious how the test suite behaves with that change. 😸
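A self-contained sketch of that narrowing, with fiberHelpers and bundler reduced to minimal stubs (the real versions live in the Tool; only the control flow is the point here):

```javascript
// Stubs standing in for the Meteor Tool's internals (illustrative only).
const fiberHelpers = {
  // The real version forbids fiber yields while fn runs.
  noYieldsAllowed(fn) { return fn(); }
};
const bundler = {
  // Stub for the JsImage reader; the real one parses program.json.
  readJsImage(path) { return { path }; }
};

function loadIsopacketFromDisk(isopacketName) {
  // Only the readFromDisk step needs the no-yields guarantee...
  const image = fiberHelpers.noYieldsAllowed(
    () => bundler.readJsImage(`${isopacketName}/program.json`));
  // ...so code that runs afterwards (e.g. a Promise.await on the new
  // async SourceMapConsumer) is free to yield again.
  return image;
}
```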

@hwillson
Contributor

Thanks @abernix - that's exactly where I'm headed, and things are looking promising. I'm prepping a PR, so we'll be able to kick the tires shortly.

@KoenLav
Contributor

KoenLav commented Jan 25, 2018

Promising, hehe.

Great to see there's progress here, would love to try it out, currently abroad but planning on testing the latest changes to Cordova and the build process next week.

@PolGuixe

My case:

  • Version: 1.6.0.1
  • MacOS 10.13.2

This is a weird error. On a new machine (MBP 15" 2017), when trying to make a build (deploying to Galaxy, --production flag, ...) it runs out of memory:

   Minifying app code                        \
<--- Last few GCs --->

[1225:0x102801e00]   257826 ms: Mark-sweep 1390.5 (1546.1) -> 1390.5 (1546.1) MB, 1153.5 / 0.1 ms  allocation failure GC in old space requested
[1225:0x102801e00]   259049 ms: Mark-sweep 1390.5 (1546.1) -> 1390.5 (1507.1) MB, 1222.2 / 0.1 ms  last resort GC in old space requested
[1225:0x102801e00]   260306 ms: Mark-sweep 1390.5 (1507.1) -> 1390.5 (1501.6) MB, 1256.9 / 0.1 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x1b75197a5ee1 <JSObject>
    0: builtin exit frame: slice(this=0x1b751424c9a1 <JSArray[1232183]>)

    1: clone [/Users/polguixe/.meteor/packages/standard-minifier-js/.2.2.3.fd3562.vtpy++os+web.browser+web.cordova/plugin.minifyStdJS.os/npm/node_modules/meteor/babel-compiler/node_modules/babylon/lib/index.js:~630] [pc=0x71f45e352f3](this=0x1b751424c8f9 <State map = 0x1b759956f021>,skipArrays=0x1b7508f02311 <undefined>)
 ...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

However, the same exact project builds on older machines without issues 🤷‍♂️ :

  • MBP 13” Late 2013 MacOS 10.13.2
  • Mac Mini 2013
  • MBP 13” Late 2011

Any ideas? Logic tells me it must be something with the new machine, but I have no idea... the only change we made on the older machines was related to this issue, which seems to be solved in newer versions of Meteor: #6952

@KoenLav
Contributor

KoenLav commented Jan 28, 2018

Upping max-old-space-size to 3 GB in CircleCI resolves the issue; the build now completes in about 12 minutes on a free instance.

To test this yourself on CircleCI (v2), add:

TOOL_NODE_FLAGS: --max-old-space-size=3072

to the environment within your image.
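For reference, that setting in a CircleCI v2 config might look roughly like this (the image name and file layout are assumptions, not taken from a real project):

```yaml
# .circleci/config.yml (sketch)
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8
    environment:
      # Give the Meteor tool's Node process a 3 GB old-space heap.
      TOOL_NODE_FLAGS: "--max-old-space-size=3072"
```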

@RobertLowe
Contributor

I had this happen to me: meteor ran fine locally on my MBP, but once I pushed the app to Heroku to build with meteor-buildpack-horse, it continually exploded. I was able to resolve the issue by updating the app from 1.6.0.1 to 1.6.1; it appears that 1.6.1 building 1.6.0.1 apps has issues. I'm sorry to the yak I shaved.

@KoenLav
Contributor

KoenLav commented Jan 29, 2018

@benjamn I've read the details about WebAssembly, and if my interpretation is correct this could soon open up the possibility of (near-)native performance for JavaScript applications.

Just checking here before I create a FR: Meteor could, at some point, update its build process to compile the bundle into WebAssembly and serve that alongside the current bundle, allowing browsers that support WebAssembly to run Meteor at (near-)native speed.

I'm probably still missing some steps, but aside from that this would be possible, or not?

@benjamn
Contributor

benjamn commented Jan 29, 2018

@KoenLav Compiling JavaScript to WebAssembly isn't something people are likely to do anytime soon, because WebAssembly has no built-in garbage collection, and JS is a language that relies on GC. Languages like Rust and C++ that don't rely on GC are currently the most attractive options for compiling to WebAssembly.

Also, JavaScript works as-is in the browser, so there's no need to compile it to something else (besides the usual compilation of non-native syntax, a la Babel).

@macrozone
Contributor

the issue is still a problem

@sebakerckhof
Contributor Author

@macrozone Can you test with 1.9-rc.2, which includes #10798?

benjamn pushed a commit to sebakerckhof/meteor that referenced this issue Jan 13, 2020
This commit implements an --exclude-archs arg to the run command
which allows you to pass a comma-separated list of architectures
to exclude.

Motivation:
Improve rebuild speeds by allowing certain web architectures
to be left out during development.

Meteor rebuild speeds, although greatly improved in recent versions,
are still a common point of frustration, as seen in the issues (meteor#10800)
and on the forum (https://forums.meteor.com/t/2x-to-3x-longer-build-time-also-on-testing-since-1-8-2-update/51028/3).

One reason for slow rebuilds is high memory pressure (meteor#9568).
The web.browser.legacy architecture is currently always built,
while most of us only test it every once in a while.
So although we don't need it most of the time during development,
it does add to the high memory usage.
Furthermore, even though these builds are delayed,
long-running synchronous tasks have the potential to block server startup (meteor#10261).
This last issue can be partially, but not fully, improved by meteor#10427.
Therefore a commonly requested feature has been to disable the legacy build (meteor/meteor-feature-requests#333).

However, we may also want to temporarily disable Cordova builds,
and we may even want to disable just the web.browser build if we're
debugging a bug in the legacy build.
Therefore I chose to create the more generic --exclude-archs option.
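A usage sketch of the flag described in the commit above (the arch names are the ones discussed in this thread):

```shell
# Skip the legacy and Cordova web builds during development:
meteor run --exclude-archs web.browser.legacy,web.cordova
```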
@opyh

opyh commented Jun 11, 2020

I still see a problem here. Tested with v1.10.2 and TOOL_NODE_FLAGS="--max-old-space-size=2048 --optimize_for_size --gc-interval=100":

It seems switching off GC in source-map generation (WASM, #10798 (comment)) causes a spike at the end of the build that goes over the 2048 MB you'd expect with this setting.

[Screenshot, 2020-06-11: memory usage graph showing a spike past the 2048 MB limit at the end of the build]

Apparent options for people stumbling upon this (correct me if I'm wrong):

  • Increase VM size for builds to 3Gi+ (making memory dependent on your bundle size, which seems odd)
  • Fix nodejs/node#29767 ("FATAL: archived threads in combination with wasm not supported") in Node / node-fibers so we can remove the --no-wasm-code-gc flag
  • Find a different way to generate source maps in Meteor
  • Add an (automatic?) switch between no / fast-method / slow-method source map generation in meteor build so you can choose between more speed and more memory usage

@sebakerckhof
Contributor Author

@opyh This issue stems from a time when we were using the non-WASM version. The slow, non-WASM version used at least as much memory. The WASM comes from Rust code, which does its own memory management, so GC isn't likely to help here.

So that leaves options 1 & 3.

@sebakerckhof
Contributor Author

Maybe we could use: https://github.com/parcel-bundler/source-map

@opyh

opyh commented Jun 11, 2020

In the meantime, how about displaying a helpful warning before source map generation?

It's a frustrating kind of error, as it happens at the very end of the build process, and it is very likely to happen on typical CI setups. You get no sensible error message (or none at all, depending on your Docker build tool), and when you stumble upon it, it's unclear where to start searching.

Theoretically, the build process could even foresee the issue and abort the build at the start, before you wait 10 minutes for an error message to appear.

Suggestion:

console.log(`
  Your build machine has ${availableMemoryInMegabytes}M memory available for building.

  If this process should abort with an out-of-memory (OOM) or “non-zero exit code 137”
  error message, please increase the machine’s available memory.

  See https://github.com/meteor/meteor/issues/9568 for details.
`)

@opyh

opyh commented Jun 11, 2020

Is there a way to limit the memory used in the Rust code? I think it's generally a good idea to have fewer non-WASM native dependencies, to be more future-proof.

Maybe related:

mozilla/source-map#412
mozilla/source-map#342

opyh added a commit to opyh/meteor-base that referenced this issue Jun 12, 2020
Meteor’s source map generation uses no GC, which can produce out-of-memory errors in CI environments with <4G RAM.

If this happens, the reason for killed containers can be difficult to track down. Some environments display a ‘OOMError’, some show a cryptic error message (‘non-zero exit code 137’), some show no error at all.

This adds a warning so you have a clue from the logs when the container is killed.

Related Meteor issue: meteor/meteor#9568
@opyh

opyh commented Jun 18, 2020

Another idea: are fibers absolutely necessary for this build step? Couldn't the build process generate the source maps in separate child processes that don't use fibers, for example?

@opyh

opyh commented Oct 26, 2020

#11143 suggests this might become a bigger issue in the near future - if more node modules begin to use WASM, you won't be able to use some of them without OOM crashes in Meteor.

@filipenevola filipenevola removed this from the Package Patches milestone Nov 7, 2020
GeoffreyBooth pushed a commit to opyh/meteor-base that referenced this issue Dec 30, 2020
@naturom

naturom commented May 3, 2021

I'm also getting this issue while building for production on Meteor 2.1.

@Grubba27
Contributor

I'm closing this issue. If this problem shows up again, I'll be really happy to reopen it. :)
