
Error in buildTypeScript: A project cannot be used in two compilations at the same time #293

Closed
hanssens opened this Issue Aug 26, 2016 · 34 comments


hanssens commented Aug 26, 2016

I'm submitting a bug report

  • Library Version:
    0.18.0 (CLI)

Please tell us about your environment:

  • Operating System:
    OSX 10.11.6
  • Node Version:
    6.1.0
  • NPM Version:
    3.8.6
  • Browser:
    all
  • Language:
    TypeScript 1.8.10

Current behavior:
Frequently, when running au run --watch, the TypeScript compile seems to hang.

In the Activity Monitor I can see that the aurelia process suddenly keeps using lots of CPU:

[Screenshot: Activity Monitor (All Processes) showing the aurelia process consuming high CPU]

The Terminal window also seems to hang, meaning I'm unable to kill the process with Ctrl+C; instead it produces a long stack trace like:

File Changed: src/views/devices/device-overview.ts
Starting 'readProjectConfiguration'...
Starting 'readProjectConfiguration'...
Finished 'readProjectConfiguration'
Finished 'readProjectConfiguration'
Starting 'processMarkup'...
Starting 'processCSS'...
Starting 'processMarkup'...
Starting 'processCSS'...
Starting 'configureEnvironment'...
Starting 'configureEnvironment'...
Finished 'configureEnvironment'
Starting 'buildTypeScript'...
Finished 'configureEnvironment'
Starting 'buildTypeScript'...
{ uid: 292,
  name: 'buildTypeScript',
  branch: false,
  error:
   { Error: gulp-typescript: A project cannot be used in two compilations at the same time. Create multiple projects with createProject instead.
       at compile (/Users/Hanssens/Work/git/xxx/app/node_modules/gulp-typescript/release/main.js:72:19)
       at buildTypeScript (/Users/Hanssens/Work/git/xxx/app/aurelia_project/tasks/transpile.ts:32:15)
       at bound (domain.js:280:14)
       at runBound (domain.js:293:12)
       at asyncRunner (/Users/Hanssens/Work/git/xxx/app/node_modules/async-done/index.js:36:18)
       at _combinedTickCallback (internal/process/next_tick.js:67:7)
       at process._tickDomainCallback (internal/process/next_tick.js:122:9)
     domain:
      Domain {
        domain: null,
        _events: {},
        _eventsCount: 0,
        _maxListeners: undefined,
        members: [] },
     domainThrown: true },
  duration: [ 0, 1050860 ],
  time: 1472216741812 }

In order to resolve this, I need to manually force quit the aurelia process in the Activity Monitor.

Expected/desired behavior:
Expected behaviour is that it would not crash. Steps that can reproduce it:

  1. Start Visual Studio Code
  2. Add several .ts files
  3. Frequently edit and save the .ts files at random

In two projects I can reproduce this about every 10~15 minutes or so, depending on how many changes I make, of course.

  • What is the expected behavior?
    Expected behaviour is NOT a crash.
  • What is the motivation / use case for changing the behavior?
    Fixing the crash.

@EisenbergEffect EisenbergEffect added the bug label Oct 3, 2016

Member

EisenbergEffect commented Oct 3, 2016

@hanssens Can you track this to any particular series of events? The error message itself is odd. I'm wondering if it happens, for example, during rapid edits/saves of multiple files. The error message makes it sound like the build process is stepping on itself somehow. Can you provide any more information?

pettys commented Oct 3, 2016

Just an anecdotal note: I experience the error "A project cannot be used in two compilations at the same time..." frequently. I run au run --watch from the Windows command line, and the command exits (and stops watching, of course) when the error occurs.

I have a muscle reflex to hit the "Save All" key binding all the time, and when I do that multiple times within 5-10 seconds, it frequently crashes with this error. It's gotten so that, after I do several saves in a row, I automatically think, "Oh crap, I bet I just crashed my build process."

It feels like there's just a little guard needed somewhere in the build script: "If a build is in progress, wait before starting another one." (Just a feeling; I'm pretty ignorant about the au build process.)
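
As an illustration of the guard idea above (a minimal sketch with made-up names; this is not the CLI's actual code), the build trigger could run at most one build at a time and coalesce any requests that arrive mid-build into a single follow-up run:

let building = false;
let pending = false;

// runBuild is assumed to return a promise that resolves when a full build finishes
function requestBuild(runBuild: () => Promise<void>): void {
  if (building) {
    // a build is already in progress: remember the request instead of starting a second one
    pending = true;
    return;
  }
  building = true;
  runBuild().then(() => {
    building = false;
    if (pending) {
      pending = false;
      requestBuild(runBuild); // one follow-up build covers all coalesced requests
    }
  });
}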

Member

EisenbergEffect commented Oct 3, 2016

That's great info. Thanks!

hanssens commented Oct 6, 2016

As @pettys mentions, the situation is the same for me: it's a hard habit of using "Save All" all the time.

As for what causes this, my hunch is the same as well. The main error produced is "gulp-typescript: A project cannot be used in two compilations at the same time.", which is thrown by these lines of code in the gulp-typescript project. As it bubbles up, it's most likely why the aurelia process hangs.

However, I'm of course not really sure how to fix this properly the Aurelia way. I would assume either using a custom gulp-typescript that handles the check for an existing tsc build differently, OR handling and consuming this specific error in the CLI somehow.

Would love to help on this issue, if I can get an idea of the "correct" approach?
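
For context, here is a purely illustrative sketch of the single-use guard the error message implies (this is not gulp-typescript's actual source): a project instance marks itself busy while a compilation is running, and a second compile() on the same instance throws until the first one finishes.

// Illustrative only -- a stand-in for the guard inside gulp-typescript.
class IllustrativeProject {
  private compiling = false;

  compile(work: () => Promise<void>): Promise<void> {
    if (this.compiling) {
      throw new Error('A project cannot be used in two compilations at the same time. ' +
        'Create multiple projects with createProject instead.');
    }
    this.compiling = true;
    return work().then(() => { this.compiling = false; });
  }
}

Two overlapping watcher runs calling compile() on one shared instance would reproduce exactly the failure seen in the stack trace above.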

hanssens commented Oct 28, 2016

This still occurs in v0.21.0 of the CLI.

Steps which can reproduce it most of the time, using Visual Studio Code:

  1. Create a new project with the cli: au new
  2. [optional] Add some additional files to it
  3. Run the watch: au run --watch
  4. Make an arbitrary edit in at least two different files and then save these all at once ("Save All" in VS Code, for example)

There's a big chance this will trigger the mentioned error and "crash" the CLI, which means I have to execute au run --watch again to restart the application.

bryandh commented Oct 28, 2016

I can confirm the last comment from @hanssens; my CLI also crashes using the provided steps.
It really seems that quick subsequent saves of different files cause the gulp-typescript library to throw the error.
I'm currently running CLI version 0.20.2.

Contributor

Thanood commented Oct 28, 2016

This is probably related: ivogabe/gulp-typescript#322
And via another issue linked there, I came across this PR in angular-starter: antonybudianto/angular-starter#61

(note that this has nothing to do with angular2 but with their gulp-typescript setup)

Btw, I'm also getting this error from time to time.

TylerJPresley commented Dec 4, 2016

I'm getting this a LOT, every few edits. Does everyone here have multiple bundles? Does this happen when you're editing files in two different bundles and they get saved at the same time?

Contributor

Thanood commented Dec 4, 2016

Even two files in the same bundle. But this hasn't happened to me since 0.23.

TylerJPresley commented Dec 4, 2016

I get it more often with 0.23. What version of gulp-typescript and typescript are you using?

Contributor

Thanood commented Dec 4, 2016

I will have a look at work tomorrow. Don't have it here.

Contributor

Thanood commented Dec 5, 2016

@TylerJPresley
gulp-typescript: 2.14.1
typescript: 2.1.1

Btw, I didn't want to say that it won't happen anymore, just that I haven't experienced it anymore. 😃

WillTartak commented Feb 25, 2017

I got here when I hit this error for a completely different reason; I'm commenting in case it helps someone else.
For me it happened after I did a pull and a subsequent build without doing an npm install first. My teammate had added a new package that I did not have, and that caused the build to fail with this message. After doing the npm install, the build completed successfully. HTH.

gama410 commented Mar 8, 2017

Adding my 2 cents here.

I can reproduce this behavior easily: when I run au generate element, I always get this error. Also, pretty much every time I save two files in a row, the error happens.
I suppose it's just because the watcher sees a file change and triggers a compilation while the first compilation is still running. I have no idea how this could be solved, though...

gama410 commented Mar 10, 2017

Hello!
Any feedback on this yet?

As a workaround, I modified the buildTypeScript method of the transpile task to create a new instance of the TypeScript compiler at every build (based on @Thanood's comment up there), by commenting out the caching guard:

// if (!typescriptCompiler) {
  let typescriptCompiler = ts.createProject('tsconfig.json', { "typescript": require('typescript') });
// }

This feels a bit ugly but fixes the problem for me while waiting for a somewhat cleaner solution.

Contributor

Thanood commented Mar 10, 2017

@gama410 Did you update aurelia-cli (or its build tasks)? The workaround should be in the default build scripts now.
If not, it should be safe to just remove the commented lines, as this is actually the accepted solution. 😃
(It's here: https://github.com/aurelia/cli/blob/master/lib/resources/tasks/transpile.ts#L24-L26)
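
For reference, a simplified sketch of what the accepted workaround amounts to (the glob and output path here are placeholders, not the exact contents of the linked transpile.ts): the TypeScript project is created fresh on every invocation of buildTypeScript, so overlapping builds never share one instance.

import * as gulp from 'gulp';
import * as ts from 'gulp-typescript';

function buildTypeScript() {
  // create a fresh project per compilation; nothing is cached across watcher runs
  const typescriptCompiler = ts.createProject('tsconfig.json', {
    typescript: require('typescript')
  });

  return gulp.src('src/**/*.ts')      // placeholder glob
    .pipe(ts(typescriptCompiler))     // gulp-typescript 2.x style piping
    .pipe(gulp.dest('dist'));         // placeholder output path
}

The trade-off, discussed in the comments that follow, is that nothing carries over between runs.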

gama410 commented Mar 10, 2017

@Thanood Thanks! Indeed, the version of the tasks I use and the ones on the repo are different... Apparently, my quick-fix was good enough :)
I think this issue could be marked as fixed then.

MisterGoodcat commented Apr 5, 2017

That's not really a good solution, and it's a pity it was accepted into the official code base. Why? It disables incremental builds and, even worse, triggers multiple successive builds when you save multiple dirty files at the same time (the "save all" feature of editors). That is actually slower than not using the watcher at all and simply performing a full build manually.
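
For contrast, a sketch of the cached variant this comment alludes to (simplified, with placeholder paths): a module-level project instance reused across watcher runs keeps gulp-typescript's incremental state, but that reuse is exactly what crashed overlapping builds in the first place.

import * as gulp from 'gulp';
import * as ts from 'gulp-typescript';

let typescriptCompiler: any = null;   // cached across builds

function buildTypeScript() {
  if (!typescriptCompiler) {
    typescriptCompiler = ts.createProject('tsconfig.json', {
      typescript: require('typescript')
    });
  }
  // reusing one project enables incremental compilation, but two overlapping
  // watcher-triggered builds on this shared instance will throw
  return gulp.src('src/**/*.ts')      // placeholder glob
    .pipe(ts(typescriptCompiler))
    .pipe(gulp.dest('dist'));         // placeholder output path
}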

pettys commented Apr 5, 2017

@MisterGoodcat I appreciate your perspective, and your concerns are probably valid, but pragmatically speaking, this contribution has been a huge improvement in my day-to-day use of aurelia, and I'm glad it was accepted into the code base as quickly as it was. I'm sure once a more robust solution is contributed that addresses your concerns things will get even better, but between the choices of having this fix and reverting back to the way it was, this is far better.

(I haven't noticed serious speed issues on my medium-small sized project.)

MisterGoodcat commented Apr 5, 2017

@pettys You are of course right that the current behavior is more robust, in that you don't have to keep an eye on the watcher anymore and restart it when it crashes (which I got pretty used to, actually). The problem, however, is that for me there's almost no point in using the watcher anymore, because with anything other than simple single-file edits the workflow is now significantly slower than before. I just did a comparison, and merely saving two files at the same time already resulted in the watcher being several seconds slower than simply performing a clean "au build" (the fact that the tracing performance has improved a lot recently also helps with this, of course). As for future fixes to this problem, I don't have high hopes. It's a long-standing issue. I even dug into the sources of several of the involved tools myself for hours a while ago, and nothing obvious revealed itself - hopefully someone else has more patience to find a solution...

May I ask how your medium-small sized project translates into terms like file count etc.? Maybe it's my particular project setup, which has a couple of hundred source TypeScript and HTML template files.

pettys commented Apr 5, 2017

@MisterGoodcat Ah, I see your point. I can see now that while this solution was a huge improvement for me, it wasn't universally. Bummer.

My src folder has 36 .ts files, 18 .html files, and 6 .less files -- sounds like 1-2 orders of magnitude less than your project.

I didn't notice the slow-down, and my incremental build is still quite a bit faster than "au build" (~10 sec for au build, ~2 sec for watched incremental build, au-cli v0.26.1).

I'm curious - for a project the size of yours, approximately how long does "au build" take?

Contributor

Thanood commented Apr 5, 2017

Maybe there's a way to "debounce" the transpilation.

MisterGoodcat commented Apr 6, 2017

@pettys That project was split into more general features which at this point don't need to be touched very often anymore, spread across six separate projects with 300+ .ts and 40 .html/.less files (think common logic, generic components etc.). Technically, all but one of these take no real dependency on Aurelia and are pre-transpiled using a classic (gulp-based) setup.

The application itself is a true Aurelia project with 250+ .ts, 50+ .html and 100+ .less files. Only this part is built using the Aurelia CLI. On a powerful laptop (4 cores, 16 gigs, SSD) it takes 28 seconds for "au build". Using a running idle watcher and saving two files simultaneously takes, with the new logic, 34 seconds. Every additional changed file increases that time even more.

Similar to what @Thanood suggests, previous efforts to fix this ourselves included mostly things like trying to debounce, serialize or even drop successive build requests, all of which were not successful. The reason I started searching for the issue here again was that I'm now convinced the root cause for the original problem is the way gulp.watch is used, and I wanted to know if this has been discussed before. Only then did I realize there has been an accepted fix (we don't check/update the project template and tasks themselves regularly unless a change is advised in some change log).

Member

EisenbergEffect commented Apr 6, 2017

We're very open to changing how we're handling this, including the gulp watch piece. I'm not sure if we have ideas on what the solution would be right now. If anyone wants to take this and experiment and come back with some recommendations, we'd love that.

MisterGoodcat commented Apr 8, 2017

@EisenbergEffect I decided to take that challenge :), and spent some more hours today looking into this. I think I found a very promising, comprehensive solution to the problem that not only gets rid of the crashes, but also introduces some tremendous performance improvements for the watcher scenario. Let me give you an idea of what I'm talking about:

  • As a reminder: full "au build" of our current main project: 28 seconds. Idle watcher, single file change (independent of file type): 18 seconds. Same but with change of two files: 34 seconds.
  • With all of the changes outlined below applied:
    • Idle watcher, single .ts file change: 3 seconds
    • Idle watcher, single .html file change: < 1 second
    • Idle watcher, single .less file change: 7 seconds
    • Idle watcher, change of three files (.ts, .html, .less) "simultaneously" ("save all"): 9 seconds

Basically, I applied three or four changes, depending on how you look at it.

Important: When I sat down to work on this, I decided to do my experiments on the build task (build.ts), not the run task (run.ts). This feature ("build --watch") has also been requested before (see: #265), and since we are not using the built-in serve feature at all, it was the logical thing to do for me. The following concepts, however, fully apply in the same way no matter where you put the watcher logic; putting it in the run task would simply require passing some of the information described below from the run task to the build task. I would of course appreciate it if that "build --watch" I just created also found its way into the official code base :).

1. Change the way you use gulp.watch

If you only want to fix the crashes and keep the incremental build features of TypeScript, you need to do the following:

  • Obviously, revert the previous "fix" for the crash to re-enable incremental builds (2c74cfe)
  • Change the individual gulp.watch calls for each project source to a single gulp.watch that takes an array of globs instead. Since all of the individual watchers triggered the same process anyway, this does not change any behavior. So basically:
let watch = function() {
  // gulp.watch(project.transpiler.source, refresh).on('change', onChange);
  // gulp.watch(project.markupProcessor.source, refresh).on('change', onChange);
  // gulp.watch(project.cssProcessor.source, refresh).on('change', onChange);

  gulp.watch(
    [
      project.transpiler.source, 
      project.markupProcessor.source, 
      project.cssProcessor.source
    ], refresh).on('change', onChange);
}

What I would recommend in addition, however, is to switch from the built-in gulp.watch feature to the separate gulp-watch package (a subtle difference but a huge improvement). The built-in feature does not pass on information about individual file changes, which makes further optimization difficult. The changes described in the details that follow rely on this, so if you want that benefit, you need to switch to gulp-watch (or a similar package that supports this).

Another small optimization is to turn off content reading. By default, the watcher reads the file contents, but they are thrown away because the actual build process re-reads them using gulp.src. For gulp-watch, there's official documentation about how to turn content reading off (see options.read). Since both the built-in gulp.watch feature and gulp-watch rely on chokidar, I would suspect that should also work for gulp.watch (although it's not documented there and I didn't test that). Sample:

import * as gulpWatch from "gulp-watch";

let watch = function() {
  return gulpWatch(
    [
      project.transpiler.source,
      project.markupProcessor.source,
      project.cssProcessor.source
    ],
    {
      read: false, // performance optimization: do not read actual file contents
      verbose: true
    },
    (vinyl) => {
      // do something with the info, or simply trigger an incremental build across all sources here
    });
};

2. Debounce

The above solution fixes the crashes and improves performance for simple scenarios, but still has the "multiple successive builds" problem when more than one file is changed rapidly ("save all"). So I decided to take a look at debouncing the build. Amazingly, this took the most time to get right. The problem is mostly finding a solution that integrates nicely with gulp's function composition and asynchrony. If you look around you'll find quite a few discussions about this and how it should be done (for example here: gulpjs/gulp#1304). I use the default "debounce" Node package. If you choose a different package, make sure it triggers on the trailing edge, or at least can be configured/forced to do so.

The proposed solutions all had the problem that they could not be applied easily to the Aurelia CLI, mostly because the CLI's underlying gulp wrapper apparently makes some assumptions about the task structure and crashes if they are not satisfied (I mostly had problems with the makeInjectable function in the Aurelia CLI's gulp.js). Since I didn't want to dig that deep into the code base, I preferred bridging to the gulp world with a simple manual solution. It basically looks like this (it's pseudo-code; the full/real code follows below):

import * as debounce from "debounce";
const debounceWaitTime = 100;

let isBuilding = false;
let refresh = debounce(() => {
  if (isBuilding) {
    return;
  }

  isBuilding = true;

  triggerActualBuild()
    .then(() => {
      isBuilding = false;
      if (weHaveAnotherPendingBuildRequest) { refresh(); }
    });
}, debounceWaitTime);

There are probably nicer solutions than that, but like I said I wanted to fix this on the project template level and not dig into the Aurelia CLI package itself.

3. Selective builds

With that in place, you get rid of the crashes and prevent rapid changes to multiple files from triggering multiple successive builds. Basically, you'll reach the "9 seconds" per incremental build performance level I mentioned in the beginning. Now, as you can see from the performance numbers above, our LESS task contributes the most to incremental build times, while at the same time we don't actually do much LESS editing in our day-to-day work. With that in mind, I decided it would be nice to trigger only those parts of the build process that actually need to run, for example only doing a TypeScript compile if .ts files actually changed, etc. To achieve this, here is what I did:

  • When gulp-watch triggers, push the changed file's information into a "poor man's queue" (an array) so it is preserved until the debounced function actually is triggered
  • In the refresh function, collect the actual file changes and test them against the configured globs of the project sources to determine which tasks need to be executed
  • That array then can also be nicely used to determine whether another build needs to be triggered when the current one finishes (because more file changes may have piled up during the build)

Ok, let's put the puzzle pieces together and see what we get.

import * as minimatch from "minimatch"; /* used to test paths against globs */
import * as gulp from "gulp";
import * as gulpWatch from "gulp-watch";
import * as debounce from "debounce";
// more imports as required, for Aurelia's sub tasks, project configuration etc...

const debounceWaitTime = 100;
let isBuilding = false;
let pendingRefreshPaths = [];

let watch = () => {
  return gulpWatch(
    [
      project.transpiler.source,
      project.markupProcessor.source,
      project.cssProcessor.source
    ],
    {
      read: false, // performance optimization: do not read actual file contents
      verbose: true
    },
    (vinyl) => {
      if (vinyl.path && vinyl.cwd && vinyl.path.startsWith(vinyl.cwd)) {
        let pathToAdd = vinyl.path.substr(vinyl.cwd.length + 1);
        log(`Watcher: Adding path ${pathToAdd} to pending build changes...`);
        pendingRefreshPaths.push(pathToAdd); 
        refresh();
      }
    });
};

The refresh function itself is the one that's debounced, and looks like this:

let refresh = debounce(() => {
  if (isBuilding) {
    log("Watcher: A build is already in progress, deferring change detection...");
    return;
  }

  isBuilding = true;

  let paths = pendingRefreshPaths.splice(0);
  let tasks = [];
  
  // dynamically compose tasks, note: extend as needed, for example with copyFiles, linting etc.
  if (paths.find((x) => minimatch(x, project.cssProcessor.source))) {
    log("Watcher: Adding CSS tasks to next build...");
    tasks.push(processCSS);
  }

  if (paths.find((x) => minimatch(x, project.transpiler.source))) {
    log("Watcher: Adding transpile task to next build...");
    tasks.push(transpile);
  }

  if (paths.find((x) => minimatch(x, project.markupProcessor.source))) {
    log("Watcher: Adding markup task to next build...");
    tasks.push(processMarkup);
  }

  if (tasks.length === 0) {
    log("Watcher: No relevant changes found, skipping next build.");
    isBuilding = false;
    return;
  }
  
  let toExecute = gulp.series(
    readProjectConfiguration,
    gulp.parallel(tasks),
    writeBundles,
    (done) => {
      isBuilding = false;
      done();
      if (pendingRefreshPaths.length > 0) {
        log("Watcher: Found more pending changes after finishing build, triggering next one...");
        refresh();
      }
    }
  );

  toExecute();
}, debounceWaitTime);

The remaining build.ts content stays more or less the same; I only added the --watch command line option. Like this:

let processBuildPipeline = gulp.series(
  readProjectConfiguration,
  gulp.parallel(
    transpile,
    processMarkup,
    processCSS,
    copyFiles
  ),
  writeBundles
);

let main;

if (CLIOptions.hasFlag("watch")) {
  main = gulp.series(
    processBuildPipeline,
    watch
  );
} else {
  main = processBuildPipeline;
}

export default main;

function readProjectConfiguration() {
  return build.src(project);
}

function writeBundles() {
  return build.dest();
}

function log(message: string) {
  console.log(message);
}

I hope this is detailed enough that at least some of the improvements can find their way into your code base; providing a full working sample is not that easy, as I would have to go over the code base and remove more company-specific details (which I also stripped from the fragments above). I would love to see the CLI improve in that direction so others can benefit from these changes too.

Cheers :)

Member

EisenbergEffect commented Apr 8, 2017

@MisterGoodcat Fantastic. Thank you so much for looking into this. @JeroenVinke Can you review this and explore using this research to improve our default tasks?

MisterGoodcat commented Apr 14, 2017

Quick update: one of my teams used these improvements during the week and confirms the watcher is rock solid now. It even survives switching large branches, where dozens of files change simultaneously. We can also confirm that the performance numbers above accurately reflect day-to-day use. I'm glad I invested that time; it really improved working on the code base a lot.

Member

JeroenVinke commented Apr 14, 2017

@MisterGoodcat Glad to hear! Thanks for your work on this. I will pick this up; I just need to finish work on a couple of other issues first.

@JeroenVinke JeroenVinke reopened this May 12, 2017

Contributor

jwx commented May 19, 2017

I'm working on the PR for this and could use a hand when it comes to real life testing. So if you've got a project that's experiencing some of the issues mentioned above (crashes when saving multiple files and poor performance from the watcher), it'd be great if you could get in touch.

gama410 commented May 22, 2017

I have not had any problems since April with the fixes introduced in that version.

MisterGoodcat commented May 22, 2017

@jwx If you want to reproduce the original crash, a trivial setup is sufficient. Like:

  • Create a new project. Make sure you have at least one .ts file and one .html file in your src folder
  • Start the watcher
  • Change both the .ts and .html file (trivial change like a white space or similar), but do not save
  • Use the "save all" feature of your editor to invoke both internal gulp watchers in a short time frame
  • Watch the crash

With that, I could reproduce the issue 100% of the time. Meaning, if you are not able to crash the watcher that way, you have fixed the issue :).

For performance testing, I'm willing to help, but you would have to provide the template, improved build tasks or some other instructions to me, as the projects in question are non-public.

@gama410 You are right that the crash is currently fixed; however, the more recent parts of the discussion focus on an alternative solution that not only fixes the crash but also improves the watcher's performance tremendously.

jwx added a commit to jwx/cli that referenced this issue May 25, 2017

fix/enhancement(cli) improve --watch flag
The improved --watch flag causes fewer crashes, performs better and is now also available on the build command.

Closes aurelia/cli#293 and aurelia/cli#265.
Contributor

jwx commented May 25, 2017

@MisterGoodcat I can't reproduce the issue according to your description, so at least that is taken care of.

It'd be great if you could see if you experience any improvement in performance. You can install the branch with: npm install git://github.com/jwx/cli#improve-tasks-watch. The new/changed tasks are build, run and watch.

Please let me know here or on gitter if you've got any questions. Thanks!

MisterGoodcat commented May 28, 2017

@jwx Beautiful! I did not perform detailed benchmarks, but using your branch results in approximately the same performance as with my own implementation. Even with our largest project, and changing a dozen html and TypeScript files simultaneously, it only takes a few seconds to finish - perfect! And of course that means I also can confirm the crash is taken care of with your changes.

Thank you also for adding the watch option to the build task :)

Contributor

jwx commented May 29, 2017

@MisterGoodcat Thanks for taking a look! Glad it looks okay. I'm not surprised it performs like your implementation, though, because a lot of it pretty much is your implementation. :)

@JeroenVinke JeroenVinke closed this in #632 Jun 10, 2017
