Performance problems #24

Closed
jhnns opened this issue Sep 17, 2012 · 22 comments

Comments

@jhnns
Member

jhnns commented Sep 17, 2012

I think there are some serious performance problems with webpack. Currently we're generating our first bundles with alamid. The basic bundle (without application logic) contains about 50-60 modules. This is fine because we like modularity and prefer writing many short modules over a single long one.

Right now webpack takes about 2 secs (!) to compile the file. This is way too long. I don't know if you have any performance tests yet, but I think it is a good idea to spend some time on improving it. Maybe you can use flame graphs for it.

We (at alamid) surely haven't done everything right and could reduce some file system calls. But I think the main problem lies in webpack.

One performance gain could be achieved by replacing all asynchronous file system calls with synchronous ones. Since you're not bundling for every HTTP request in production, it's a good idea to make them synchronous.
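
A minimal sketch of that idea (the helper name is illustrative, not actual alamid or webpack code):

var fs = require("fs");

// A synchronous stat avoids the callback round trip, which is acceptable when
// bundling happens once at build time rather than on every HTTP request.
function moduleExists(filePath) {
    try {
        return fs.statSync(filePath).isFile();
    } catch (err) {
        return false;
    }
}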

@sokra
Member

sokra commented Sep 18, 2012

I would like to reproduce your build. I tried alamid/test/exampleApp, but there seems to be no implementation in some services (which only gives a weird error message, see below).

Bundling is a complex process, but your case should take less than a second.

  • Don't use slow/synchronous loaders or preprocessors. You can check whether the compilation is CPU-bound.
  • Check that you don't use production-only features in development; they are slow, e.g. minimizing.
  • Use the webpack cache. (!!! This gives the most speed)

I'm trying to optimize performance, so there are some features planned:

  • Multi-process compilation, because compilation is mostly CPU-bound.
  • Store the cache not only in memory but also in the filesystem to reuse it.
  • Performance profiling for loaders etc.
  • If you have more hints, you're welcome.

You may send me your flame graph, that would be cool :)


The most time-consuming parts of webpack's compilation are:

  • path resolution, because there are many paths which need to be tested
  • JavaScript parsing (esprima)
  • loaders, e.g. using CoffeeScript/CSS/LESS takes twice the time

Error for alamid/test/exampleApp, caused by missing exports from the services:

C:\Users\Sokrates\Eigene Repos\node_modules\alamid\lib\core\helpers\instantiateClasses.js:12
        classes[identifier] = new Class();
                              ^
TypeError: object is not a function
    at C:\Users\Sokrates\Eigene Repos\node_modules\alamid\lib\core\helpers\instantiateClasses.js:12:31
    at Function._.each._.forEach (C:\Users\Sokrates\Eigene Repos\node_modules\alamid\node_modules\underscore\underscore.js:84:24)
    at instantiateClasses (C:\Users\Sokrates\Eigene Repos\node_modules\alamid\lib\core\helpers\instantiateClasses.js:11:7)
    at populateServiceRegistry (C:\Users\Sokrates\Eigene Repos\node_modules\alamid\lib\server\bootstrap.server.js:31:32)
    at Object.bootstrap [as startServer] (C:\Users\Sokrates\Eigene Repos\node_modules\alamid\lib\server\bootstrap.server.js:87:29)
    at Object.<anonymous> (C:\Users\Sokrates\Eigene Repos\node_modules\alamid\test\exampleApp\app\init.server.js:6:8)
    at Module._compile (module.js:449:26)
    at Object.Module._extensions..js (module.js:467:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)

@jhnns
Member Author

jhnns commented Sep 18, 2012

Thanks for investigating. I will try to provide you with a flame graph so we can sort out whether we're responsible for this mess 😉. I will also try the caching feature to improve loader performance.

You could take a look at fs.realpath. I think it's better than fs.exists because you can provide a caching object, which reduces the number of fs.stat calls. This caching object should be empty for every new bundler call, of course.
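
A small sketch of that suggestion, assuming the optional cache parameter that Node's fs.realpath accepted at the time (the path is just an example):

var fs = require("fs");

// Object of already-resolved paths; reusing it avoids repeated fs.stat/fs.lstat
// calls. Reset it for every new bundler call.
var realpathCache = {};

fs.realpath("./app/pages/MainPage.html", realpathCache, function (err, resolvedPath) {
    if (err) throw err;
    console.log(resolvedPath);
});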

@jhnns
Member Author

jhnns commented Sep 18, 2012

Ah btw: I'm not sure what the current development state of alamid/core is, but in order to run the bundle tests you can just run node node_modules/mocha/bin/_mocha test/core/bundle/createBundle.test.js in the root folder. This worked for me with the current revision.

@jhnns
Member Author

jhnns commented Sep 18, 2012

Why do you always use this.cacheable && this.cacheable(); with a condition?

@sokra
Member

sokra commented Sep 18, 2012

I thought that loaders might be used in another module system which does not support cacheable, so I specified it as optional here. If you only use webpack, the guard can be omitted ;)
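
A minimal loader sketch showing the difference (the loader itself is illustrative and just passes the source through):

// Guarded call: also works in module systems that don't implement cacheable.
module.exports = function (source) {
    this.cacheable && this.cacheable();
    return source;
};

// Webpack-only variant: the guard can be dropped.
// module.exports = function (source) {
//     this.cacheable();
//     return source;
// };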

@sokra
Member

sokra commented Sep 19, 2012

Your nodeclass loader is really slow. Removing it gives a 30% speedup :-|

It requires the file, cross-compiles it and runs the result through vm.runInNewContext. That's three compilations... Why is this so complex? What makes nodeclass so special?

@jhnns
Member Author

jhnns commented Sep 19, 2012

In my recent profiling tests nodeclass was inconspicuous, but I'll take a look at it :). Currently it seems to increase packaging time by 200-300ms.

nodeclass is stuck between two development stages. I built it a year ago and started refactoring it six months ago. Unfortunately I haven't found the time to refactor it properly, so currently there are different concepts mixed together.

@sokra
Member

sokra commented Sep 19, 2012

All modules are cacheable, so using watch mode should make the second compilation very fast. You should integrate some kind of dev server into alamid which recompiles on change (e.g. use webpack-dev-middleware).
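
A rough sketch of such a setup with express, using the later webpack / webpack-dev-middleware API (the 2012-era signatures differed; the publicPath and config path are illustrative):

var express = require("express");
var webpack = require("webpack");
var webpackDevMiddleware = require("webpack-dev-middleware");

var compiler = webpack(require("./webpack.config.js"));
var app = express();

// Serves the bundle from memory and recompiles whenever a source file changes.
app.use(webpackDevMiddleware(compiler, { publicPath: "/assets/" }));
app.listen(8080);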

Your compilation is CPU-bound, so there are no big issues with blocking stuff. Multi-process compilation should give a big speedup (next version).

@jhnns
Member Author

jhnns commented Sep 20, 2012

Is caching also applied when using webpack without watch mode (with the node process still running)?

@sokra
Member

sokra commented Sep 20, 2012

Not by default, but you can pass options.cache. It should be a new instance of webpack/lib/Cache. See here.
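
A sketch of what that looks like; only the cache-related part is shown, the rest of the options are elided:

var Cache = require("webpack/lib/Cache");

// Keep the node process and this options object alive between builds so the
// same cache instance is reused for every compilation.
var options = {
    cache: new Cache()
    // ...your other webpack options
};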

@sokra
Member

sokra commented Sep 20, 2012

There is info in the stats whether a module was loaded from cache (fromCache) or stored to cache (toCache). If there is no info but a cache was used, the module was not cacheable.

@sokra
Member

sokra commented Sep 25, 2012

Made performance improvements in 0.6.x, and there is webpack@0.7.0-beta7. Multi-process building is also available, but it's only recommended for bigger projects (there is a process fork overhead of ~100-200ms).

Changelog

Also check --profile or options.profile = true for finding performance problems in loaders. Try printing the output with:

var formatOutput = require("webpack/lib/formatOutput");
console.log(formatOutput(stats, {
  colors: true,
  context: options.context
}));

or use the command line

@jhnns
Member Author

jhnns commented Sep 26, 2012

Nice! Looks good, I'll try it 😄

@jhnns
Member Author

jhnns commented Sep 26, 2012

It seems like the raw-loader is somehow influencing the resolving time. I don't know what's going on, but if a module is yellow or red, the raw-loader is involved in 80% of the cases.

/home/roomie/node/alamid/~/raw-loader!./app/pages/MainPage.html [1220ms: 1167ms resolving, 0ms waiting, 52ms build, 1ms children]
(webpack)/~/bundle-loader/lazy.js!/home/roomie/node/alamid/~/raw-loader!./app/pages/home/HomePage.html [1496ms: 1327ms resolving, 0ms waiting, 50ms build, 119ms children]
/home/roomie/node/alamid/~/raw-loader!./app/pages/blog/BlogPage.html [979ms: 925ms resolving, 0ms waiting, 53ms build, 1ms children]

I'm using a postprocessor (which could influence the resolving time), but when I profile it, it doesn't take more than 1ms.

@sokra
Member

sokra commented Sep 26, 2012

Webpack has to check many module folders to find it:

  • ./app/pages/node_modules
  • ./app/pages/web_modules
  • ./app/node_modules
  • ...
  • ...alamid/node_modules

(see options.resolve.modulesDirectorys)

Each time it has to check raw-webpack-web-loader, raw-webpack-loader, raw-web-loader, raw-loader and raw. (see options.resolve.loaderPostfixes)

Then it has to check and read the package.json.

Then it has to check index.webpack-loader.js, index.webpack-web-loader.js, index.web-loader.js, index and index.js. (see options.resolve.loaderExtensions)

That's a bit slow... (TODO optimize it)

You can try adding ...node/alamid/node_modules to options.paths, maybe that makes it a bit faster.
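
A sketch of that tweak (the option name is as given above; assuming the options file lives in the project root, so __dirname points at the alamid directory):

// Point the resolver directly at the project's own node_modules folder so it
// doesn't have to walk every lookup directory first.
options.paths = [require("path").join(__dirname, "node_modules")];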

@sokra
Member

sokra commented Sep 26, 2012

You can also try to reduce the lookup dirs/extensions/postfixes to the ones you actually use.

These are the defaults (see README):

options.resolve.extensions = ["", ".webpack.js", ".web.js", ".js"];
options.resolve.postfixes = ["", "-webpack", "-web"];
options.resolve.loaderExtensions = [".webpack-web-loader.js", ".webpack-loader.js", ".web-loader.js", ".loader.js", "", ".js"];
options.resolve.loaderPostfixes = ["-webpack-web-loader", "-webpack-loader", "-web-loader", "-loader", ""];
options.resolve.modulesDirectorys = ["web_modules", "jam", "node_modules"];
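
For example, a project that only uses plain .js files, npm packages, and the plain "-loader" naming could trim those lists down to something like this (a sketch, not taken from a real config):

options.resolve.extensions = ["", ".js"];
options.resolve.postfixes = [""];
options.resolve.loaderExtensions = [".loader.js", "", ".js"];
options.resolve.loaderPostfixes = ["-loader", ""];
options.resolve.modulesDirectorys = ["node_modules"];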

@sokra
Member

sokra commented Oct 16, 2012

Performance checklist (ordered by priority):

  • use caching
    • use watch mode, or a cache object
    • flag loaders as cacheable
  • optimize resolving
    • minimize resolve checks in your settings
  • use worker processes
  • file an issue with new ideas on how to optimize performance
  • cry that your hyper-complex project takes multiple seconds to compile

I'll close this issue for now...

@sokra sokra closed this as completed Oct 16, 2012
@slyngbaek

@sokra I'm trying to get the cache working without using watch mode. All of my loaders are cacheable.

Passing in an empty object in options.cache does not seem to actually cache. The stats.toCache and stats.fromCache flags are always undefined.

Is the CachePlugin supposed to be used in conjunction with this?

@jhnns
Member Author

jhnns commented Mar 10, 2015

Yes, I think so. A lot has changed since then.

Reading the source code, you could try this:

// webpack.config.js
var CachePlugin = require("webpack/lib/CachePlugin")
var myCache = {};

module.exports = {
    ...
    plugins: [
        new CachePlugin(myCache)
    ]
};

However, it is still important that the node.js process keeps running. You need to keep a reference to the myCache object.

@slyngbaek

It looks like all I need to provide in the config is an empty object in options.cache. The WebpackOptionsApply module seems to create the cache plugin for us:

    if(options.cache === undefined ? options.watch : options.cache) {
        var CachePlugin = require("./CachePlugin");
        compiler.apply(new CachePlugin(typeof options.cache === "object" ? options.cache : null));
    }

I tested adding the CachePlugin instance to the list of plugins, just passing in an empty object, and both, but stats.toCache and stats.fromCache are always undefined. Subsequent builds take the same duration even though the cache object gets populated.

On a side note, it looks like the Webpack module exports the CachePlugin.

@slyngbaek

It turns out I was using gulp-webpack, which creates a new compiler instance every time, thus ignoring whatever was put in the cache. Solved by simply using webpack directly and reusing the same compiler for every run.
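
Roughly what that change looks like (a sketch; the config path is illustrative):

var webpack = require("webpack");

// Create the compiler once and reuse it, so its in-memory cache survives
// between builds.
var compiler = webpack(require("./webpack.config.js"));

function build(done) {
    compiler.run(function (err, stats) {
        done(err, stats); // subsequent builds reuse cached modules
    });
}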

Thanks for the help!

@trusktr

trusktr commented Aug 29, 2015

A new compiler instance every time should be just fine according to the docs as long as you store the cache object somewhere where it remains alive between compiler instantiations.
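
A sketch of that approach, based on the WebpackOptionsApply snippet above (the config path is illustrative):

var webpack = require("webpack");

// Lives outside the compiler, so it stays alive across compiler instantiations.
var sharedCache = {};

function createCompiler() {
    var config = require("./webpack.config.js");
    config.cache = sharedCache; // WebpackOptionsApply wraps this in a CachePlugin
    return webpack(config);
}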
