test: Add script which allows running all of the profiling tests and compare results #33186

Closed
wants to merge 5 commits
2 changes: 2 additions & 0 deletions .gitignore
@@ -39,3 +39,5 @@ yarn-error.log
# User specific bazel settings
.bazelrc.user

.notes.md
baseline.json
48 changes: 48 additions & 0 deletions packages/core/test/render3/perf/README.md
@@ -1,19 +1,67 @@
### Build

```
yarn bazel build //packages/core/test/render3/perf:{name}.min_debug.es2015.js --define=compile=aot
```

### Run

```
node dist/bin/packages/core/test/render3/perf/{name}.min_debug.es2015.js
```

### Profile

```
node --no-turbo-inlining --inspect-brk dist/bin/packages/core/test/render3/perf/{name}.min_debug.es2015.js
```

Then connect with a debugger (the `--inspect-brk` option ensures that benchmark execution doesn't start until a debugger is connected and code execution is manually resumed).

The benchmark code itself contains calls that start (`console.profile`) and stop (`console.profileEnd`) a profiling session.
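The pattern can be sketched as follows; `runProfiled` here is a hypothetical helper for illustration, not the actual harness API:

```javascript
// Minimal sketch of the profiling pattern used by the benchmarks.
// NOTE: `runProfiled` is illustrative only -- the real harness differs.
// The profiler records only what runs between profile() and profileEnd().
function runProfiled(name, iterations, fn) {
  console.profile(name);     // starts a CPU profile in the attached debugger
  for (let i = 0; i < iterations; i++) {
    fn();
  }
  console.profileEnd(name);  // stops the profile; it shows up in DevTools
}

runProfiled('noop_change_detection', 1e6, () => {/* work under test */});
```

Without an attached debugger, `console.profile` is a no-op in Node, so the sketch is safe to run standalone.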

### Run All

To run all of the benchmarks, use the `profile_all.js` script:
```
node packages/core/test/render3/perf/profile_all.js
```

NOTE: This command will build all of the tests, so there is no need to do so manually.

Optionally, use the `--write` flag to save the run results to a file for later comparison:

```
node packages/core/test/render3/perf/profile_all.js --write baseline.json
```
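The saved file is a plain JSON map keyed by benchmark name. A sketch of the shape `--write` produces (the values here are illustrative, taken from the sample output below):

```javascript
// Shape of the baseline file written by --write: one entry per benchmark,
// carrying the measured time and its unit. Values are illustrative.
const baseline = {
  directive_instantiate: {name: 'directive_instantiate', time: 276.652, unit: 'ms'},
  noop_change_detection: {name: 'noop_change_detection', time: 93.256, unit: 'us'},
};
console.log(JSON.stringify(baseline, undefined, 2));
```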

### Comparing Runs

If you have saved a baseline (as described in the step above), you can compare a new run against it to see the change in performance:

```
node packages/core/test/render3/perf/profile_all.js --read baseline.json
```

The resulting output should look something like this:
```
┌────────────────────────────────────┬─────────┬──────┬───────────┬───────────┬───────┐
│ (index) │ time │ unit │ base_time │ base_unit │ % │
├────────────────────────────────────┼─────────┼──────┼───────────┼───────────┼───────┤
│ directive_instantiate │ 276.652 │ 'ms' │ 286.292 │ 'ms' │ -3.37 │
│ element_text_create │ 262.868 │ 'ms' │ 260.031 │ 'ms' │ 1.09 │
│ interpolation │ 257.733 │ 'us' │ 260.489 │ 'us' │ -1.06 │
│ listeners │ 1.997 │ 'us' │ 1.985 │ 'us' │ 0.6 │
│ map_based_style_and_class_bindings │ 10.07 │ 'ms' │ 9.786 │ 'ms' │ 2.9 │
│ noop_change_detection │ 93.256 │ 'us' │ 91.745 │ 'us' │ 1.65 │
│ property_binding │ 290.777 │ 'us' │ 280.586 │ 'us' │ 3.63 │
│ property_binding_update │ 588.545 │ 'us' │ 583.334 │ 'us' │ 0.89 │
│ style_and_class_bindings │ 1.061 │ 'ms' │ 1.047 │ 'ms' │ 1.34 │
│ style_binding │ 543.841 │ 'us' │ 545.385 │ 'us' │ -0.28 │
└────────────────────────────────────┴─────────┴──────┴───────────┴───────────┴───────┘
```
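The `%` column is the relative change between the current run and the baseline, computed after normalizing both measurements to seconds, so two runs whose units differ still compare correctly. A minimal sketch of that computation:

```javascript
// Unit multipliers, matching the UNITS map in profile_all.js.
const UNITS = {ps: 1e-12, ns: 1e-9, us: 1e-6, ms: 1e-3, s: 1};

// Relative change (%) of `time` against `baseTime`, normalized to seconds
// so the two measurements may use different units.
function percentChange(time, unit, baseTime, baseUnit) {
  const t = time * UNITS[unit];
  const base = baseTime * UNITS[baseUnit];
  return Number.parseFloat(((t - base) / base * 100).toFixed(2));
}

percentChange(276.652, 'ms', 286.292, 'ms');  // -3.37, as in the first row above
```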

### Notes

In all of the above commands, `{name}` should be replaced with the actual benchmark (folder) name, e.g.:
@@ -16,8 +16,8 @@ import {createAndRenderLView} from '../setup';
class Tooltip {
tooltip?: string;
position?: string;
static ngFactoryDef = () => new Tooltip();
static ngDirectiveDef = ɵɵdefineDirective({
static ɵfac = () => new Tooltip();
static ɵdir = ɵɵdefineDirective({
type: Tooltip,
selectors: [['', 'tooltip', '']],
inputs: {tooltip: 'tooltip', position: 'position'}
@@ -75,7 +75,7 @@ function testTemplate(rf: RenderFlags, ctx: any) {

const viewTNode = createTNode(null !, null, TNodeType.View, -1, null, null) as TViewNode;
const embeddedTView = createTView(
-1, testTemplate, 21, 10, [Tooltip.ngDirectiveDef], null, null, null,
-1, testTemplate, 21, 10, [Tooltip.ɵdir], null, null, null,
[['position', 'top', 3, 'tooltip']]);

// initialize global state
2 changes: 1 addition & 1 deletion packages/core/test/render3/perf/micro_bench.ts
@@ -7,7 +7,7 @@
*/
const performance = require('perf_hooks').performance;

const MIN_SAMPLE_COUNT_NO_IMPROVEMENT = 10;
const MIN_SAMPLE_COUNT_NO_IMPROVEMENT = 30;
const MIN_SAMPLE_DURATION = 100;

const UNITS = ['ms', 'us', 'ns', 'ps'];
96 changes: 96 additions & 0 deletions packages/core/test/render3/perf/profile_all.js
@@ -0,0 +1,96 @@
/**
* @license
* Copyright Google Inc. All Rights Reserved.
*
* Use of this source code is governed by an MIT-style license that can be
* found in the LICENSE file at https://angular.io/license
*/

const shell = require('shelljs');
const fs = require('fs');
const path = require('path');

const argv = process.argv;
const baseDir = path.dirname(argv[1]);
const readPath = argv[2] == '--read' ? argv[3] : null;
const writePath = argv[2] == '--write' ? argv[3] : null;

const UNITS = {
'ps': 1e-12,
'ns': 1e-9,
'us': 1e-6,
'ms': 1e-3,
's': 1,
};

// Contains the list of tests which should be built and profiled
const profileTests =
shell.ls(baseDir).filter((filename) => fs.statSync(path.join(baseDir, filename)).isDirectory());

// build tests
shell.exec(
`yarn bazel build --define=compile=aot ` +
profileTests.map((name) => `//packages/core/test/render3/perf:${name}.min_debug.es2015.js`)
.join(' '));

// profile tests
// tslint:disable-next-line:no-console
console.log('------------------------------------------------');
// tslint:disable-next-line:no-console
console.log('PROFILING');
// tslint:disable-next-line:no-console
console.log('------------------------------------------------');

// This stores the results of the run
const times = {};


// If a readPath was given, read the baseline into `times`
if (readPath) {
const json = JSON.parse(shell.cat(readPath));
Object.keys(json).forEach((name) => {
const run = json[name];
times[name] = {
name: run.name,
base_time: run.time,
base_unit: run.unit,
};
});
}
profileTests.forEach((name) => {
// tslint:disable-next-line:no-console
console.log('----------------', name, '----------------');
const log =
shell.exec(`node dist/bin/packages/core/test/render3/perf/${name}.min_debug.es2015.js`);
if (log.code !== 0) throw new Error(log.stderr || log.stdout);
const matches = log.stdout.match(/: ([\d\.]+) (.s)/);
if (matches === null) throw new Error(`Could not parse the benchmark time from: ${log.stdout}`);
const runTime = times[name] || (times[name] = {name: name});
runTime.time = Number.parseFloat(matches[1]);
runTime.unit = matches[2];
if (runTime.base_unit) {
const time = runTime.time * UNITS[runTime.unit];
const base_time = runTime.base_time * UNITS[runTime.base_unit];
const change = (time - base_time) / base_time * 100;
runTime['%'] = Number.parseFloat(change.toFixed(2));
}
});
// tslint:disable-next-line:no-console
console.log('================================================');

// If a writePath was given, write `times` to the file
if (writePath) {
const baseTimes = {};
profileTests.forEach((name) => {
const run = times[name];
baseTimes[name] = {
name: run.name,
time: run.time,
unit: run.unit,
};
});
fs.writeFileSync(writePath, JSON.stringify(baseTimes, undefined, 2));
}

// Pretty print the table with the run information
// tslint:disable-next-line:no-console
console.table(times, ['time', 'unit', 'base_time', 'base_unit', '%']);