Webpack's Performance at Airbnb #5718

Closed
gdborton opened this Issue Sep 25, 2017 · 79 comments

@gdborton
Contributor

gdborton commented Sep 25, 2017

TL;DR: We want to improve webpack's performance in our development environment, and after investigating many performance options we're looking for direction on how to make progress. We want to bring our cached production build time and our webpack development server boot time down from 2m58s and 2m28s, respectively, to ~15 seconds.

NOTE: Please don't close the issue because I deleted the template; if it's important, leave a comment and I'll add the template w/ answers to this issue.

Some background on our build:

We have 4 build configurations

| Name | Description | Chunks | Entrypoints | Modules |
| --- | --- | --- | --- | --- |
| Client | Builds our bundles with the browser in mind. | 710 | 587 | 15314 |
| Server | Builds a subset of our client bundles with slightly different ways of handling async loading, externals, etc. | 259 | 259 | 11152 |
| Embed | Builds a few files meant to be embedded on non-airbnb sites (no externals, commons chunks, or code splitting). | 10 | 10 | 1151 |
| AMP | Builds a few AMP-specific files to help ensure we're AMP compliant (I'm sure compliant is the wrong term). | 2 | 2 | 1928 |

Current production build timings

A full build with no babel/happypack cache and without uglify takes: ~4m31s

A full build with a cache, but no uglify takes: ~2m58s

Timing for restarting the webpack server and loading our homepage

These are measured in our CI system, which is slower than our actual development environment.

  • Current Master - full page load, 147673.2 ms (~2m28s)

Optimizations that we've investigated

Production build optimizations already in place

  • happypack - This saves us time in our production builds by parallelizing/caching our Babel and Handlebars transforms. We don't use it in our devServer, as babel-loader has a built-in cache and is far and away our most expensive loader.
  • webpack-parallel-uglify-plugin - This parallelizes/caches the uglification process. On a fully cached build this takes ~3 seconds.

Development server optimizations already in place

  • We boot our dev server with only a small subset of our build in place, as it's unlikely a developer will need to compile all of airbnb.com locally. We start with any bundles needed for our CommonsChunkPlugin, and add additional bundles to the compiler as they're needed. This significantly reduces memory pressure as well as initial boot and rebuild times.
  • Any server bundles added to the server compiler automatically trigger a compile of their client-side equivalents; it's assumed that if you server-render something, you'll also want the client code.
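The "boot with a subset, add bundles on demand" idea above can be sketched as a pure selection step: given all known entrypoints, start with only the commons bundles, and grow the active set as requests come in. This is an illustrative reconstruction, not Airbnb's actual code; the names (selectEntries, allEntries) and paths are hypothetical.

```javascript
// Hypothetical sketch of the progressive dev server's entry selection:
// boot with only the commons chunks, then add entrypoints as they're requested.
function selectEntries(allEntries, commonsEntries, requested) {
  const active = new Set(commonsEntries);
  for (const name of requested) {
    if (allEntries[name]) active.add(name);
  }
  // Build the entry object a compiler would be handed for this (re)compile.
  const entry = {};
  for (const name of active) entry[name] = allEntries[name];
  return entry;
}

// Example: start with commons only; a request for "home" grows the set.
const allEntries = {
  commons: './src/commons.js',
  home: './src/home.js',
  listing: './src/listing.js',
};
const booted = selectEntries(allEntries, ['commons'], []);
const afterRequest = selectEntries(allEntries, ['commons'], ['home']);
```

The real setup would additionally recreate or extend the webpack compiler when the active set changes; only the selection logic is shown here.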

Build optimizations that help dev and prod

  • Disabled symlink resolving; this helped both our dev boot times and our production build times.
  • babel-loader cache - This is also beneficial for production builds, but is hugely helpful in development as well.
  • devtool: 'cheap-module-eval-source-map'
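For readers unfamiliar with where these three options live, they map onto a webpack config roughly as follows. This is a minimal sketch with illustrative paths, not the actual Airbnb config.

```javascript
// Sketch of how the three optimizations above appear in a webpack config.
module.exports = {
  // Cheap, eval-based source maps: much faster to generate in development.
  devtool: 'cheap-module-eval-source-map',
  resolve: {
    // Skip symlink resolution entirely.
    symlinks: false,
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: {
            // Persist transform results on disk between runs.
            cacheDirectory: true,
          },
        },
      },
    ],
  },
};
```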

Other stuff that we've tried

Our primary focus has been on dev server boot-up time, as we believe this will have the highest impact on our developers' productivity.

  • hard-source-webpack-plugin - We tried this plugin during its 0.4 releases. It was very promising at the time, shaving 25% off our dev server boot time, but we encountered lots of random caching errors that often required manually deleting the cache and restarting the server. We recently revisited the plugin and didn't see a performance improvement in our local environment; its impact may have been diminished by recent webpack performance work. We also have not revisited the plugin's newer releases.
  • cache-loader - Reduces our dev server boot up time by ~3%. We assume this would have a larger impact without babel-loader's built in cache.
  • thread-loader - Increased our dev server boot up time by ~57%.
  • dll plugin - At the time we investigated, this was incompatible with CommonsChunkPlugin (this may have changed). We didn't see much performance improvement, and didn't think it was worth the risk of development/production build differences.

Things we've considered but haven't tried

  • Disabling CommonsChunkPlugin in development, manually tracking module dependencies for bundles, and caching this information on disk. This would help our boot time as we could cache build results, and verify that they're still correct based on config/module changes. This comes at a cost though as bundles can't be compiled correctly (matching production) without also compiling with CommonsChunk targets in place.

Profiling numbers

Again, these are the times that it would take on average to restart our development server and load up our homepage.

  • thread-loader full page load, 232344.2ms (3m52s)
  • cache-loader full page load, 137157.8ms (2m16s)
  • hard-source-webpack-plugin full page load, 85980.6ms (1m25s)

Goals

What we REALLY want to see is our webpack development server rebooting and serving requests (with a warmed cache) within 10 seconds. The 10 second number is arbitrary, but I think a good heuristic would be the time it takes to read all the modules from disk + some % of that time.

Looking over the modules for our client config, it took 1337.9 ms to read in and compute a sha256 hash for each module. If we assume that, because of configuration differences, each of our configs needs to process all of its modules independently (there is major overlap), we arrive at an overall time of 2581ms (1337.9 + 974.3 + 100.6 + 168.4). Let's also assume that we need to read them twice (once for calculating a cacheable hash, and again when reading from cache), and that because we're caching source maps, reading from cache will take twice as long. That puts our build at 7743ms.

If we add some overhead for verifying previous module resolutions (say 2 seconds, assuming some improvements here based on this RFC issue), the build would now be at 9743ms.

Of course we need to account for bundle concatenation, module deduplication, scope hoisting, etc. but I believe that this is not where the bulk of the work is happening. Let's put another 50% on our build time.

I believe that 14,614ms should be a realistic upper limit for a cached build. Probably 20 seconds for a mostly cached build with a few files changed (this will prevent only optimizing for a full cache hit).

After many profiles, there doesn't seem to be much low-hanging fruit left in terms of performance improvements.

What's the best way for us to contribute to making this a reality?

Additional considerations

We were able to build a tool internally as a proof of concept for maximum performance. Without a significant effort investment, we were able to configure a simple bundler/server combo that compiled our entire build (incorrectly, so grain of salt) in ~20 seconds.

We did this basically by moving module transforming/parsing to worker threads and caching the results of the transforms. The main thread then builds the bundle dependency trees, and sends a sort of manifest to the workers for them to compile the final output.

I think that this approach would lead to the best performance possible, but it definitely comes with caveats. For one thing, instances cannot be shared among workers, so data passed to them must be serializable.

@sokra sokra added this to the webpack 4 milestone Sep 26, 2017

@sokra sokra added the performance label Sep 26, 2017

@gdborton
Contributor

gdborton commented Sep 26, 2017

❤️ the v4 milestone, but is there any way that we can help push this forward? I think that converting loaders/AST parsing to be purely functional, caching their results, and running them in workers by default would get us a lot of the way there. I'd have no idea how to handle this though.

@graingert

graingert commented Sep 27, 2017

We boot our dev server with only a small subset of our build in place as it's unlikely a developer will need to compile all of airbnb.com locally. We start with any bundles needed for our CommonsChunkPlugin, and add addtional bundles to the compiler as they're needed. This helps significantly to reduce memory pressure as well as initial boot time, and rebuilds.

is the source for this anywhere?

@graingert

graingert commented Sep 27, 2017

is this a dupe of #1905?

@gdborton
Contributor

gdborton commented Sep 28, 2017

It's not public, this was something that was developed ad-hoc internally to get more oomph out of the server.

It's certainly similar as both issues are about performance, but I like that this one includes specific performance goals. Also the scale for this issue is about 20x larger.

@ljharb

ljharb commented Sep 28, 2017

@sokra @TheLarkInn any update here? it'd be great to explore paths to making webpack performant enough for @airbnb's use cases :-)

@rafde
Contributor

rafde commented Sep 29, 2017

#5660 has performance updates. Maybe pulling that branch could help test perf changes. That or wait for beta tag.

@gdborton
Contributor

gdborton commented Sep 29, 2017

I'll give it a shot and see how it impacts our build.

@evilebottnawi
Member

evilebottnawi commented Oct 5, 2017

Regarding uglifyjs-webpack-plugin: I have a PR that fixes caching and parallelization (webpack-contrib/uglifyjs-webpack-plugin#108). It should allow replacing webpack-parallel-uglify-plugin with the base plugin. Feedback welcome.

@evilebottnawi
Member

evilebottnawi commented Oct 5, 2017

Also, regarding cache-loader: a PR that avoids fs calls has been merged (webpack-contrib/cache-loader#5) and should improve performance, but a new version including it has not been published yet.

@gdborton gdborton referenced this issue Oct 6, 2017

Closed

[WIP] Make module.build run in parallel. #5773

0 of 6 tasks complete
@d3viant0ne
Member

d3viant0ne commented Oct 10, 2017

@gdborton - uglifyjs-webpack-plugin@1.0.0-beta.3 includes the performance improvements mentioned by @evilebottnawi

@gdborton
Contributor

gdborton commented Oct 10, 2017

The uglify portion of the build is already very fast thanks to our caching with webpack-parallel-uglify-plugin. It would certainly be nice to be back on the official plugin, but I don't think it'll get us any performance benefits.

@lencioni
Contributor

lencioni commented Oct 13, 2017

Also, just a little more context: we are currently attempting to enable ModuleConcatenationPlugin, but are running out of memory. Without this plugin, we are able to build by giving webpack 8 GiB of memory (--max_old_space_size=8192); with it enabled, after about 30 minutes of chugging along we run out of memory even after bumping the limit to 30 GiB (--max_old_space_size=30720).

Not sure if this is helpful, but here is some of the output:

<--- Last few GCs --->

 1493642 ms: Mark-sweep 28067.1 (30766.7) -> 28067.1 (30766.7) MB, 3555.4 / 0.0 ms [allocation failure] [scavenge might not succeed].
 1497169 ms: Mark-sweep 28067.1 (30766.7) -> 28067.1 (30766.7) MB, 3522.8 / 0.0 ms [allocation failure] [scavenge might not succeed].
 1500673 ms: Mark-sweep 28067.1 (30766.7) -> 28080.9 (30742.7) MB, 3500.5 / 0.0 ms [last resort gc].
 1504325 ms: Mark-sweep 28080.9 (30742.7) -> 28097.6 (30738.7) MB, 3647.8 / 0.0 ms [last resort gc].


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x25a46f43fa99 <JS Object>
    1: DoJoin(aka DoJoin) [native array.js:~129] [pc=0xc6fd4d20abc] (this=0x25a46f404241 <undefined>,w=0x27f6d75ffec9 <JS Array[78]>,x=78,N=0x25a46f404281 <true>,J=0x25a46f44a369 <String[1]:  >,I=0x8df922bb911 <JS Function ConvertToString (SharedFunctionInfo 0x25a46f45dbc9)>)
    2: Join(aka Join) [native array.js:180] [pc=0xc6fd11852d2] (this=0x25a46f404241 <undefined>,w=0x27f6d75ffec9 <JS Arr...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [node]
 2: 0x10aa4ac [node]
 3: v8::Utils::ReportApiFailure(char const*, char const*) [node]
 4: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [node]
 5: v8::internal::Factory::NewRawTwoByteString(int, v8::internal::PretenureFlag) [node]
 6: v8::internal::Runtime_StringBuilderJoin(int, v8::internal::Object**, v8::internal::Isolate*) [node]
 7: 0xc6fd11060c7
Aborted

Hopefully whatever work we end up doing to improve webpack's speed will also help with memory when this plugin is enabled, but it's possible we need to look at improvements in this plugin specifically as well.

@TheLarkInn
Member

TheLarkInn commented Oct 13, 2017

@ljharb I'm keeping a close eye on the PR that @gdborton has open. I believe the cases described in the initial writeup can be solved through parallelization at the Module#build level, as Gary suggests in his PR notes.

@TheLarkInn
Member

TheLarkInn commented Oct 13, 2017

Regarding the errors you are seeing, @lencioni: would you be able to provide a flamechart or profile trace? I'm curious whether this is the sheer size of the dependency graph in memory or a leak somewhere.

@udivankin

udivankin commented Nov 6, 2017

@gdborton can I ask if you run webpack on host machine or inside some kind of VM container in dev environment?

@gdborton
Contributor

gdborton commented Nov 6, 2017

Either or, doesn't seem to affect build times at all.

@jcrben

jcrben commented Nov 6, 2017

Not to distract from the main question, but I guess that the incremental rebuild performance isn't an issue for Airbnb? Seems like anything over 1 second is pretty annoying, and I've struggled to keep it down.

@udivankin

udivankin commented Nov 7, 2017

@gdborton Actually, it does. In our case a cold build is about 25% slower, and incremental builds add a noticeable fixed lag on top. There may be some gotchas that others using such a config have already hit and solved.

@gdborton
Contributor

gdborton commented Nov 7, 2017

@udivankin I can't speak to your build times, but running webpack on a local machine vs an aws dev instance doesn't significantly impact our builds. Especially not in any meaningful way that would solve this problem for us.

@jcrben Incremental build times can always be faster, but they definitely aren't our biggest fire at the moment.

@gdborton
Contributor

gdborton commented Nov 15, 2017

These are some of the primary performance talking points from a meeting that @iancmyers and I had with @TheLarkInn on 11/10. I figured I'd share them in case they help generate more performance ideas.

Parallelize Resolving

Rather than pushing the parallel Module.build PR forward, we decided it'd be a smaller/easier starting place to parallelize resolving of modules/loaders.

  • I worked on this with @lencioni on Monday, but didn't see any performance benefit; we actually saw a regression.
  • I think we overestimated how much this would save because the call stack is recursive, which makes profiles show resolving as taking more time than it actually does. Resolving doesn't seem to be CPU intensive, so we didn't see the benefit we expected.
  • If the callback logic (loaders/parsing) could also be moved to child workers, we'd still see the benefit here.

Here is the commit that does this work if anyone wants to try it out.

Loaders Returning ASTs

It was mentioned that webpack recently added support for returning an AST from a loader, to avoid re-parsing where possible. The assumed benefit is that cache-loader/thread-loader would then be able to keep this work off the main thread.

In my experience, parsing source => AST is much faster than JSON.parse()ing a serialized AST.

Based on this, I decided to compare cost of parse vs reading ast from disk:

  • I modified webpack/Parser.js to output source/acorn parse options and the resulting asts to disk, then wrote this gist to compare acorn.parse vs JSON.parse on an existing AST
  • results:
~/code/ast-size-exploration/airbnb 19s
❯ node --max_old_space_size=8000 ast-comparison.js
found 17193  source files to parse
parsing all the files: 6509.396ms
found 17193  ast files to parse
json parsing asts: 21846.787ms

^ JSON.parse of an AST is super slow. Returning an AST from a parallelized loader will slow down builds; it's better to parse on the main thread, or to move all parser logic to workers and store only source/map combos, which can then be used to quickly rebuild an AST.
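One reason JSON.parse of a cached AST loses to reparsing is sheer size: an ESTree-style AST is many times larger than the source it encodes. A tiny hand-written illustration (the node shape below is a simplified ESTree sketch, not acorn's exact output):

```javascript
// A serialized AST dwarfs its source, which is one reason deserializing a
// cached AST can be slower than simply reparsing the original code.
const source = '1 + 2';
const ast = {
  type: 'Program',
  start: 0,
  end: 5,
  body: [
    {
      type: 'ExpressionStatement',
      start: 0,
      end: 5,
      expression: {
        type: 'BinaryExpression',
        start: 0,
        end: 5,
        operator: '+',
        left: { type: 'Literal', start: 0, end: 1, value: 1, raw: '1' },
        right: { type: 'Literal', start: 4, end: 5, value: 2, raw: '2' },
      },
    },
  ],
};
const blowup = JSON.stringify(ast).length / source.length;
```

Even for a five-character expression the serialized AST is orders of magnitude larger, and real module graphs only amplify the gap.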

Compare webpack@3.8.1 vs webpack-next (4)

There have been a few performance improvements shipped in the next branch of webpack (v4).

webpack-next is slightly slower for us. The only blockers we have are around happypack and an internal loader/plugin combo, both easily fixed.

  • I created this script to install webpack@3.8.1, then the current (at the time) head of the next branch and compare their timings.
  • webpack@3.8.1 - 542 seconds
  • b7c746d - 559 seconds

Unique Server Setup Causing Redundant Work?

We use some custom express middleware to trim our webpack configs, then progressively add to our compilers as bundles are requested. Sean pointed out that we might be interrupting our initial builds and rebuilding modules unnecessarily.

  • I modified Parser.js to hash the contents of files it's called against and keep track of the number of times it's called for a given hash, then added a logging function that counts how many hashes have been parsed a given number of times.
  • Building our core flow at server boot shows this...
{
  '1': 776,
  '2': 3486,
  '3': 632,
  '4': 255,
  ...
}
  • Requesting an unbuilt bundle then logging shows this...
{
  '2': 778,
  '3': 7,
  '4': 3481,
  '5': 2,
  '6': 631,
  '7': 1,
  '8': 255,
  ...
}

After digging in a bit more, I found that webpack uses file timestamps rather than build timestamps to determine whether it needs to rebuild something. This works great when the timestamps are available, but causes issues in our case, since timestamps are applied by the watcher and only when a file changes (changing a single file adds timestamp info for all files known by webpack).
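A generic illustration of why missing timestamps hurt (this is not webpack's actual code): a timestamp-based rebuild check has to treat "no timestamp recorded" as "needs rebuild", which is exactly what bites a setup where the watcher only records timestamps after a file change.

```javascript
// Generic sketch: decide whether a module needs rebuilding from recorded
// file timestamps. An unknown timestamp must be treated as stale.
function needsRebuild(fileTimestamps, file, builtAt) {
  const ts = fileTimestamps.get(file);
  if (ts === undefined) return true; // never recorded: assume stale
  return ts >= builtAt; // changed at or after the last build
}

const timestamps = new Map([['a.js', 1000]]);
```

Until the watcher has populated timestamps for every file, every module falls into the "assume stale" branch and gets rebuilt.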

Use Asap?

We didn't talk about this in the meeting, but @lencioni mentioned that asap might have better performance/call stacks than process.nextTick.

I replaced all the calls I could find, and didn't see a noticeable difference in build times or the appearance of the call stack. Also, time spent in (program) (afaik that means idle CPU) didn't decrease.

Remaining TODOs on my end:

Set up a small demo repo showcasing the file-timestamp issue we're running into. This will show the community what our progressive server looks like, and also allow us to file a perf bug against webpack. I will update this comment with relevant links.

cc: @ljharb @sokra

@MagicDuck
Contributor

MagicDuck commented Nov 18, 2017

We're on an older version of webpack here, so maybe it's not as much of an issue, but it could still be an avenue of investigation. For our build:

$ time npm run build  
0.03s user 0.31s system 0% cpu 2:37.30 total
$ time WEBPACK_SKIP_SOURCEMAPS=true npm run build
0.08s user 0.24s system 0% cpu 1:36.33 total

Note: WEBPACK_SKIP_SOURCEMAPS=true is an env var we set up to turn off sourcemaps.
Could some sort of sourcemap caching help?...

Edit. Looks like this was caused by a loader we were using which was behaving a bit like the export loader, but applied on a larger scale.

filipesilva (Contributor) commented Nov 18, 2017

@lencioni are you still having high memory usage when enabling ModuleConcatenationPlugin? I saw a similar report in angular/angular-cli#5618 (comment).


lencioni (Contributor) commented Nov 18, 2017 (comment minimized)

sod commented Nov 20, 2017

Just throwing this for the sake of competition: have you tried FuseBox? Maybe webpack performance could profit by inheriting strategies used there.

Our build with webpack is at 30 seconds currently. FuseBox 15 seconds. And FuseBox restart 500 milliseconds.

My wild guess is that FuseBox can resume from disk, so it is able to maintain webpack's in-memory watcher performance even across restarts.


TheLarkInn (Member) commented Dec 26, 2017

Also, @gdborton, if you are willing to do so, I have created a plugin based on webpack's new tapable API (it will be added to v4, but needs some visual cleanup). If you are willing to pull alpha.2 or webpack@webpack/webpack#next, you can add this plugin:

https://gist.github.com/TheLarkInn/38d403dfce002e26472fb1b630f3cb8d

This should give us an even more in-depth look, at the plugin level, at what the slowest-running piece is. Another tool we can use to boil things down.


filipesilva (Contributor) commented Jan 8, 2018

@TheLarkInn what's the plugin name going to be when it makes it into webpack 4?


TheLarkInn (Member) commented Jan 10, 2018 (comment minimized)

gdborton (Contributor) commented Jan 18, 2018

I attempted to run our profiling script against the newer alpha versions, but I hit some errors against pre.3 and pre.4:

SyntaxError: Unexpected token m in JSON at position 0
    at JSON.parse (<anonymous>)
    at JsonParser.parse (/Users/gary_borton/airlab/repos/3-airbnb/node_modules/webpack/lib/JsonParser.js:15:21)
    at doBuild (/Users/gary_borton/airlab/repos/3-airbnb/node_modules/webpack/lib/NormalModule.js:329:32)
    at runLoaders (/Users/gary_borton/airlab/repos/3-airbnb/node_modules/webpack/lib/NormalModule.js:229:11)
    at /Users/gary_borton/airlab/repos/3-airbnb/node_modules/loader-runner/lib/LoaderRunner.js:370:3

Here are the numbers anyway, although I don't know how relevant they actually are.

Build setup looks like:
Compiler: client, entryPoints: 3, modules: 4923

| node version | webpack version | iteration 0 | iteration 1 | iteration 2 | average time (ms) |
| --- | --- | --- | --- | --- | --- |
| 6.11.1 | 4.0.0-alpha.1 | 141134ms | 26355ms | 26115ms | 64534.67ms |
| 6.11.1 | 4.0.0-alpha.2 | 139014ms | 28278ms | 27213ms | 64835ms |
| 6.11.1 | 4.0.0-alpha.3 | 271669ms | 45744ms | 42501ms | 119971.33ms |
| 6.11.1 | 4.0.0-alpha.4 | 252604ms | 42248ms | 39134ms | 111328.67ms |
| 8.9.2 | 4.0.0-alpha.1 | 77905ms | 20122ms | 20150ms | 39392.33ms |
| 8.9.2 | 4.0.0-alpha.2 | 78336ms | 20179ms | 19982ms | 39499ms |
| 8.9.2 | 4.0.0-alpha.3 | 155312ms | 33330ms | 34598ms | 74413.33ms |
| 8.9.2 | 4.0.0-alpha.4 | 156199ms | 34105ms | 34675ms | 74993ms |
sokra (Member) commented Jan 18, 2018

Does this mean we got slower between alpha.2 and alpha.3?


sokra (Member) commented Jan 18, 2018

Ah, that's what you were saying about the error. The numbers are probably not comparable because of it. It looks like you need to remove json-loader from your configuration to fix the error.
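webpack 4 parses `.json` modules natively, so a leftover json-loader rule hands webpack's JsonParser a JavaScript module (`module.exports = …`) instead of raw JSON — which is exactly why `JSON.parse` fails with "Unexpected token m". A hypothetical before/after for the migration:

```javascript
// webpack 1-3 style: JSON needed an explicit loader rule.
const v3Rules = [{ test: /\.json$/, loader: "json-loader" }];

// webpack 4 style: drop the rule entirely; the built-in JsonParser
// handles .json files. A helper like this illustrates the change:
function dropJsonLoader(rules) {
  return rules.filter((rule) => rule.loader !== "json-loader");
}

console.log(dropJsonLoader(v3Rules)); // []
```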


gdborton (Contributor) commented Feb 13, 2018

Still trying to build out a profile with webpack 4. I disabled json-loader, but I'm running into this v8 error now:

FATAL ERROR: v8::ToLocalChecked Empty MaybeLocal.
 1: node::Abort() [node]
 2: 0x12190dc [node]
 3: v8::Utils::ReportApiFailure(char const*, char const*) [node]
 4: 0x12bb5a1 [node]
 5: v8_inspector::V8InspectorSessionImpl::sendProtocolResponse(int, std::unique_ptr<v8_inspector::protocol::Serializable, std::default_delete<v8_inspector::protocol::Serializable> >) [node]
 6: v8_inspector::protocol::DispatcherBase::sendResponse(int, v8_inspector::protocol::DispatchResponse const&, std::unique_ptr<v8_inspector::protocol::DictionaryValue, std::default_delete<v8_inspector::protocol::DictionaryValue> >) [node]
 7: v8_inspector::protocol::Profiler::DispatcherImpl::stop(int, std::unique_ptr<v8_inspector::protocol::DictionaryValue, std::default_delete<v8_inspector::protocol::DictionaryValue> >, v8_inspector::protocol::ErrorSupport*) [node]
 8: v8_inspector::protocol::Profiler::DispatcherImpl::dispatch(int, v8_inspector::String16 const&, std::unique_ptr<v8_inspector::protocol::DictionaryValue, std::default_delete<v8_inspector::protocol::DictionaryValue> >) [node]
 9: v8_inspector::protocol::UberDispatcher::dispatch(std::unique_ptr<v8_inspector::protocol::Value, std::default_delete<v8_inspector::protocol::Value> >) [node]
10: v8_inspector::V8InspectorSessionImpl::dispatchProtocolMessage(v8_inspector::StringView const&) [node]
11: 0x12bc387 [node]
12: v8::internal::FunctionCallbackArguments::Call(void (*)(v8::FunctionCallbackInfo<v8::Value> const&)) [node]
13: 0xb8fe3c [node]
14: v8::internal::Builtin_HandleApiCall(int, v8::internal::Object**, v8::internal::Isolate*) [node]
TheLarkInn (Member) commented Feb 13, 2018

@bmeurer is there something we can do for this?


bmeurer commented Feb 14, 2018

Hm, looks like misuse of the inspector API. @hashseed or @schuay can you take a look?


schuay commented Feb 14, 2018

My guess is that this ToLocalChecked() fails: https://github.com/nodejs/node/blob/b58a1cd70a2ccc97bb97bba4464d2b5aaecf5ab5/src/inspector_js_api.cc#L57

If you have a debug build of node, could you confirm / check why it's failing?


eugeneo commented Feb 14, 2018

@gdborton is there some way I could repro that locally? Feel free to open a bug against node - https://github.com/nodejs/node/issues and cc eugeneo.


gdborton (Contributor) commented Feb 15, 2018

We found that it was related to a profiling plugin we had added to get a stack trace; removing it allowed the build to go through.

Build setup looks like:
Compiler: client, entryPoints: 2, modules: 3150

| node version | webpack version | iteration 0 | iteration 1 | iteration 2 | average time (ms) |
| --- | --- | --- | --- | --- | --- |
| 6.11.1 | 3.10.0 | 144125ms | 29938ms | 28288ms | 67450ms |
| 6.11.1 | 4.0.0-beta.1 | 312964ms | 65834ms | 64084ms | 147627ms |
| 8.9.2 | 3.10.0 | 81648ms | 22559ms | 22656ms | 42287ms |
| 8.9.2 | 4.0.0-beta.1 | 176611ms | 42175ms | 43291ms | 87359ms |

Webpack 4 looks a lot slower here, which I definitely didn't expect.


benneq commented Mar 7, 2018

@gdborton Do you have more results using webpack 4.0.0 final and 4.1.0?

Our app's compile time got about 20% slower since we upgraded from webpack 3.x to 4.0; 4.1 looks the same as 4.0. Watch mode start time got 20% slower, too.

We reuse our old webpack.config.js from 3.x, with only UglifyJsPlugin removed. The resulting build size is the same as before, but as I said: it got 20% slower.

Here's our config: (Maybe we made some mistake?)

{
	context: __dirname + "/app",
	entry: {
		app: './app.ts'
	},
	output: {
		path: targetPathApp,
		filename: './_js/[name].js'
	},
	resolve: {
		extensions: ['.ts', '.js', '.css', '.scss', '.html']
	},
	module: {
		rules: [
			{
				test: /\.ts$/,
				loader: 'thread-loader?workers='+(require('os').cpus().length - 2)+'!strip-loader?strip[]=debug,strip[]=console.debug,strip[]=console.log!ng-annotate-loader?ngAnnotate=ng-annotate-patched!ts-loader?happyPackMode=true'
			},
			{
				test: /\.js$/,
				loader: 'thread-loader?workers='+(require('os').cpus().length - 2)+'!strip-loader?strip[]=debug,strip[]=console.debug,strip[]=console.log!ng-annotate-loader?ngAnnotate=ng-annotate-patched'
			},
			{
				test: /\.html$/,
				loader: 'html-loader'
			},
			{
				test: /\.scss$/,
				exclude: /node_modules/,
				loader: ExtractTextPlugin.extract({
					use: 'thread-loader?workers='+(require('os').cpus().length - 2)+'!css-loader?-minimize!sass-loader',
					fallback: 'style-loader'
				})
			},
			{
				test: /\.png$/,
				loader: 'url-loader?limit=16384&mimetype=image/png'
			},
			{
				test: /\.svg(\?v=\d+\.\d+\.\d+)?$/,
				loader: 'url-loader?limit=16384&mimetype=image/svg+xml'
			},
			{
				test: /(fontawesome).*\.(eot|otf|svg|ttf|woff|woff2)(\?v=\d+\.\d+\.\d+)?$/i,
				loader: 'file-loader?name=./_fonts/font-awesome/[name].[ext]&publicPath=./../'
			}
		]
	},
	plugins: [
		new webpack.ProvidePlugin({
			"$": "jquery",
			"jQuery": "jquery",
			"window.jQuery": "jquery",
			"window.CodeMirror": "codemirror",
			"moment": "moment"
		}),
		new webpack.DefinePlugin({
			'ENV': JSON.stringify('production')
		}),
		new webpack.LoaderOptionsPlugin({
			minimize: true,
			debug: false
		}),
		new ExtractTextPlugin('./_css/app.css'),
		new ForkTsCheckerWebpackPlugin({
			checkSyntacticErrors: true,
			tsconfig: '../tsconfig.json'
		}),
		new HtmlWebpackPlugin({
			template: 'index.ftl.ejs',
			inject: false,
			filename: 'index.ftl',
			minify: {
				collapseWhitespace: true,
				removeComments: true,
				removeScriptTypeAttributes: true,
				removeStyleLinkTypeAttributes: true,
				minifyCSS: true
			}
		}),
		new CopyWebpackPlugin([
			{ from: '../node_modules/ckeditor', to: '_js/ckeditor' },
			{ from: '_img', to: '_img' },
			{ from: '../node_modules/timezones.json/timezones.json', to: '_json/timezones.json/timezones.json' },
			{ from: '_json/countrycodes/mledoze_countries_iso3166_custom.json', to: '_json/countrycodes/mledoze_countries_iso3166_custom.json' }
		])
	]
}

montogeek (Member) commented Mar 7, 2018

@benneq Have you tried using webpack 2 Rules syntax?


benneq commented Mar 7, 2018

What syntax do you mean?


@montogeek (comment minimized)
sod commented Mar 7, 2018

@benneq are you sure all those thread-loader invocations don't do more harm than good? The first post of this thread states that it made their build 60% slower; try removing them.
And if you keep using it, take note of the warning in the webpack docs about its interaction with sass-loader:

node-sass has a bug which blocks threads from the Node.js threadpool. When using it with the thread-loader set workerParallelJobs: 2.

And maybe remove all those minify configurations and use webpack --mode=production and webpack --mode=development --watch instead to boost development performance.

https://medium.com/webpack/webpack-4-mode-and-optimization-5423a6bc597a
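The two suggestions above can be sketched together — the rule shape here is illustrative, not @benneq's exact config: let `--mode` drive minification, and cap thread-loader parallelism for sass as the docs advise.

```javascript
// Exported as a function so the CLI's --mode flag is visible to the config.
const configFactory = (env, argv) => ({
  mode: argv.mode || "development", // `webpack --mode=production` for releases
  module: {
    rules: [
      {
        test: /\.scss$/,
        use: [
          // node-sass can block the Node.js threadpool, hence the cap of 2
          { loader: "thread-loader", options: { workerParallelJobs: 2 } },
          "css-loader",
          "sass-loader",
        ],
      },
    ],
  },
});

module.exports = configFactory;
```

In production mode webpack 4 enables minification on its own, so the hand-rolled minify plugins can usually be dropped from the development path entirely.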


benneq commented Mar 7, 2018

@montogeek I'll check that soon.

@sod Yes, they improve our build times a lot.
In webpack 3, our build times (for the whole project, not only the webpack build) went from 10 minutes to 7 minutes just by adding a few thread-loaders to the webpack config.
In webpack 4 it's 14 minutes (without thread-loader) vs. 10 minutes.

As I said, we just left our webpack 3 config untouched and use it with webpack 4; we only removed the Uglify plugin.

Our start command for the config file I posted above is this:

webpack --mode production --config webpack.config.prod.js

I'm not really concerned about the watch mode. It went from 15 to 18 seconds startup time. That's acceptable. But 3 additional minutes on the build server are annoying.

I'm not quite sure which plugins and options I can remove from the config, because I don't know whether all plugins already respect the new development and production flags, so I just kept everything as it was with webpack 3.


sokra (Member) commented Mar 9, 2018

Could you try without the ExtractTextWebpackPlugin?


benneq commented Mar 9, 2018

@montogeek : changing the syntax didn't change build times at all.

@sod : I removed webpack.LoaderOptionsPlugin. No improvement to build time, but bundle size increased by 5%, so I'll keep the plugin (only in the production build, of course).

@sokra : We use ExtractTextWebpackPlugin v4.0.0-beta.0
I now just removed the lines concerning ExtractTextWebpackPlugin, but build time stays the same.
Is there some built-in way in webpack 4 to generate an external CSS file?


edmorley (Contributor) commented Apr 20, 2018

Is there some built-in way in webpack 4 to generate an external CSS file?

The recommendation with webpack 4 is to switch from extract-text-webpack-plugin to mini-css-extract-plugin (since the former has wontfixed official webpack 4 support, and the latter is more lightweight anyway).
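A hypothetical migration of the earlier ExtractTextPlugin rule to mini-css-extract-plugin might look like this (assumes the plugin is installed; filenames are illustrative):

```javascript
const MiniCssExtractPlugin = require("mini-css-extract-plugin");

module.exports = {
  module: {
    rules: [
      {
        test: /\.scss$/,
        // the plugin's loader replaces ExtractTextPlugin.extract({...})
        use: [MiniCssExtractPlugin.loader, "css-loader", "sass-loader"],
      },
    ],
  },
  plugins: [new MiniCssExtractPlugin({ filename: "./_css/app.css" })],
};
```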


dtothefp commented Apr 20, 2018

Seems like this issue has gone quiet since the release of Webpack 4, and others are reporting performance problems when upgrading from v3 to v4 (#6767). I upgraded a quite large project from v2 to v4 and am considering downgrading to v3, as our rebuild times are in the range of 30s to 2 minutes. I'm using mini-css-extract-plugin and the DLL plugin along with HappyPack. Does anyone have any updates on Webpack v4 performance?


evilebottnawi (Member) commented Apr 21, 2018

@dtothefp can you provide a link to your repo, or create a minimal reproducible test repo?


buoyantair (Contributor) commented Apr 27, 2018

@dtothefp Preferably in a different issue I guess :D


Bnaya commented Apr 27, 2018

@dtothefp For dev, I'm swapping mini-css-extract-plugin for plain style-loader, so I get HMR and much faster rebuilds.
Also, I'm on a TypeScript project: I set ts-loader to transpileOnly and run tsc separately, and my rebuilds are very fast (~1s).
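A minimal sketch of that setup — the rule shapes are assumptions, not Bnaya's actual config:

```javascript
const isDev = process.env.NODE_ENV !== "production";

// Skip type-checking inside webpack; run `tsc --noEmit` (or
// fork-ts-checker-webpack-plugin) in a separate process instead.
const tsRule = {
  test: /\.tsx?$/,
  loader: "ts-loader",
  options: { transpileOnly: true },
};

// In dev, prepend style-loader so CSS stays in the JS bundle (HMR, fast
// rebuilds); in production you would prepend an extract plugin's loader
// instead (omitted here to keep the sketch dependency-free).
const cssLoaders = isDev ? ["style-loader", "css-loader"] : ["css-loader"];
```

The trade-off: with transpileOnly, type errors no longer fail the webpack build, so the separate tsc process is what keeps the checks.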


abhinavsingi commented May 29, 2018

Node version 8.9.4 should be used, as webpack relies on some optimizations that landed in that particular version for build times.


benneq commented May 29, 2018

What's the equivalent 10.x version where these optimizations are included?


sokra (Member) commented Jun 7, 2018

Seems like this issue has gone quiet since the release of Wepack 4 and that others are reporting performance problems when upgrading from v3 to v4

Sorry, we can't watch every discussion in older issues. I'd better close this one so people will open new issues.

If you still have issues with rebuild performance, here is a guide for getting them fixed:

  • Rebuilding in production mode is slow because it doesn't use caching. Check that you are using development mode.
  • Reduce your configuration to remove all 3rd-party plugins. Is it still slow?
  • Don't use a minimizer plugin.
  • Try with latest node.js version. And latest webpack version.
  • Delete lockfile and node_modules and reinstall.
  • Open a new issue.
  • If possible create a minimal repro repo which has the same effect.
  • If possible capture a cpu profile of the rebuild process and attach it to the issue.

@sokra sokra closed this Jun 7, 2018
