Simplify perf test scripts (prettier#12699)
fisker authored and medikoo committed Jan 3, 2024
1 parent 9f5c3ce commit 5e3af8a
Showing 5 changed files with 26 additions and 19 deletions.
14 changes: 5 additions & 9 deletions CONTRIBUTING.md
@@ -78,24 +78,20 @@ If you want to know more about Prettier's GitHub labels, see the [Issue Labels](

If you're contributing a performance improvement, the following Prettier CLI options can help:

- `--debug-repeat N` uses a naïve loop to repeat the formatting `N` times and measures the average run duration. It can be useful to highlight hot functions in the profiler. The measurements are printed at the debug log level, use `--loglevel debug` to see them.
- `--debug-benchmark` uses [`benchmark`](https://npm.im/benchmark) module to produce statistically significant duration measurements. The measurements are printed at the debug log level, use `--loglevel debug` to see them.
- `--debug-repeat N` uses a naïve loop to repeat the formatting `N` times and measures the average run duration. It can be useful for highlighting hot functions in the profiler. The repeat count can also be set via the `PRETTIER_PERF_REPEAT` environment variable.
- `--debug-benchmark` uses the [`benchmark`](https://npm.im/benchmark) module to produce statistically significant duration measurements.
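As a rough illustration, the naïve loop behind `--debug-repeat` boils down to timing repeated runs and averaging. This is a hypothetical sketch, not Prettier's actual implementation; `measureAverage` and the sample workload are invented for the example:

```javascript
// Hypothetical sketch of a naïve repeat loop: time `repeat` runs of a
// function and return the average duration in milliseconds.
// Not Prettier's actual implementation.
function measureAverage(fn, repeat) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < repeat; i += 1) {
    fn();
  }
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return elapsedNs / 1e6 / repeat; // average milliseconds per run
}

// Stand-in workload; in Prettier's case this would be one formatting pass.
const avgMs = measureAverage(() => JSON.stringify({ a: 1, b: [2, 3] }), 1000);
console.log(`average: ${avgMs.toFixed(4)} ms`);
```

Averaging over many runs keeps the hot functions visible in a sampling profiler, which is exactly why the flag exists.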

For convenience, the following commands for profiling are available via `package.json` `scripts`.

_Unfortunately, [`yarn` simply appends passed arguments to commands and cannot reference them by name](https://github.com/yarnpkg/yarn/issues/5207), so we have to use inline environment variables to pass them._

- `PERF_FILE=<filename> PERF_REPEAT=[number-of-repetitions:1000] yarn perf:repeat` starts the naïve loop. See the CLI output for when the measurements finish, and stop profiling at that moment.
- `PERF_FILE=<filename> PERF_REPEAT=[number-of-repetitions:1000] yarn perf:repeat-inspect` starts the naïve loop with `node --inspect-brk` flag that pauses execution and waits for Chromium/Chrome/Node Inspector to attach. Open [`chrome://inspect`](chrome://inspect), select the process to inspect, and activate the CPU Profiler, this will unpause execution. See the CLI output for when the measurements finish, and stop the CPU Profiler at that moment to avoid collecting more data than needed.
- `PERF_FILE=<filename> yarn perf:benchmark` starts the `benchmark`-powered measurements. See the CLI output for when the measurements finish.
- `PRETTIER_PERF_REPEAT=1000 yarn perf <filename>` starts the naïve loop. See the CLI output for when the measurements finish, and stop profiling at that moment.
- `PRETTIER_PERF_REPEAT=1000 yarn perf:inspect <filename>` starts the naïve loop with the `node --inspect-brk` flag, which pauses execution and waits for Chromium/Chrome/Node Inspector to attach. Open [`chrome://inspect`](chrome://inspect), select the process to inspect, and activate the CPU Profiler; this unpauses execution. See the CLI output for when the measurements finish, and stop the CPU Profiler at that moment to avoid collecting more data than needed.
- `yarn perf:benchmark <filename>` starts the `benchmark`-powered measurements. See the CLI output for when the measurements finish.

In the above commands:

- `yarn && yarn build` ensures the compiler-optimized version of Prettier is built prior to launching it. Prettier's own environment checks are defaulted to production and removed during the build. The build output is cached, so a rebuild will happen only if the source code changes.
- `NODE_ENV=production` ensures Prettier and its dependencies run in production mode.
- `node --inspect-brk` pauses the script execution until Inspector is connected to the Node process.
- `--loglevel debug` ensures the `--debug-repeat` or `--debug-benchmark` measurements are printed to `stderr`.
- `> /dev/null` ensures the formatted output is discarded.

In addition to the options above, you can use [`node --prof` and `node --prof-process`](https://nodejs.org/en/docs/guides/simple-profiling/), as well as `node --trace-opt --trace-deopt`, to get more advanced performance insights.
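For intuition, the "statistically significant" measurement that `--debug-benchmark` delegates to the `benchmark` package can be sketched as collecting many samples and summarizing them. This is an illustrative stand-in only; `sampleStats` and its workload are invented for the example and are not the `benchmark` module's API:

```javascript
// Hedged sketch of statistically minded measurement: collect many timing
// samples, then report mean and (sample) standard deviation. The real
// `--debug-benchmark` flag uses the `benchmark` npm module instead.
function sampleStats(fn, samples) {
  const timesMs = [];
  for (let i = 0; i < samples; i += 1) {
    const start = process.hrtime.bigint();
    fn();
    timesMs.push(Number(process.hrtime.bigint() - start) / 1e6);
  }
  const mean = timesMs.reduce((a, b) => a + b, 0) / timesMs.length;
  const variance =
    timesMs.reduce((a, t) => a + (t - mean) ** 2, 0) / (timesMs.length - 1);
  return { mean, stddev: Math.sqrt(variance) };
}

// Stand-in workload; Prettier would format a file here.
console.log(sampleStats(() => JSON.parse('{"x":1}'), 50));
```

Reporting a spread alongside the mean is what separates this from the naïve loop: it tells you whether two measured durations actually differ.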

7 changes: 5 additions & 2 deletions changelog_unreleased/cli/12682.md
@@ -1,3 +1,6 @@
#### Skip print code or write file when performance flags found (#12682 by @fisker)
#### Simplify performance test (#12682, #12698 by @fisker)

When `--debug-benchmark` or `--debug-repeat` is used, the CLI will skip print the formatted code to the screen or write files.
When `--debug-benchmark` or `--debug-repeat` is passed:

1. The CLI skips printing the formatted code to the screen and writing files
1. The log level is set to `debug` automatically
6 changes: 3 additions & 3 deletions package.json
@@ -142,9 +142,9 @@
"test:dist-standalone": "cross-env NODE_ENV=production TEST_STANDALONE=1 jest",
"test:integration": "jest tests/integration",
"test:dist-lint": "eslint --no-eslintrc --no-ignore --no-inline-config --config=./scripts/bundle-eslint-config.cjs \"dist/**/*.{js,mjs}\"",
"perf:repeat": "yarn && yarn build && cross-env NODE_ENV=production node ./dist/bin-prettier.js --debug-repeat ${PERF_REPEAT:-1000} ${PERF_FILE:-./index.js} > /dev/null",
"perf:repeat-inspect": "yarn && yarn build && cross-env NODE_ENV=production node --inspect-brk ./dist/bin-prettier.js --debug-repeat ${PERF_REPEAT:-1000} ${PERF_FILE:-./index.js} > /dev/null",
"perf:benchmark": "yarn && yarn build && cross-env NODE_ENV=production node ./dist/bin-prettier.js --debug-benchmark ${PERF_FILE:-./index.js} > /dev/null",
"perf": "yarn && yarn build && cross-env NODE_ENV=production node ./dist/bin-prettier.js",
"perf:inspect": "yarn && yarn build && cross-env NODE_ENV=production node --inspect-brk ./dist/bin-prettier.js",
"perf:benchmark": "yarn perf --debug-benchmark",
"lint": "run-p lint:*",
"lint:typecheck": "tsc",
"lint:eslint": "cross-env EFF_NO_LINK_RULES=true eslint . --format friendly",
8 changes: 8 additions & 0 deletions src/cli/context.js
@@ -75,6 +75,14 @@ class Context {
debugRepeat,
};
}

const { PRETTIER_PERF_REPEAT } = process.env;
if (PRETTIER_PERF_REPEAT && /^\d+$/.test(PRETTIER_PERF_REPEAT)) {
return {
name: "PRETTIER_PERF_REPEAT (environment variable)",
debugRepeat: Number(PRETTIER_PERF_REPEAT),
};
}
}
}
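Pulled out of its class, the environment-variable fallback added above behaves like this standalone sketch (the helper name `readPerfRepeat` is invented for illustration; the regex and return shape mirror the diff):

```javascript
// Standalone sketch of the PRETTIER_PERF_REPEAT fallback shown in the diff:
// accept the variable only when it is a plain string of digits, otherwise
// report no performance-test flag at all.
function readPerfRepeat(env) {
  const value = env.PRETTIER_PERF_REPEAT;
  if (value && /^\d+$/.test(value)) {
    return {
      name: "PRETTIER_PERF_REPEAT (environment variable)",
      debugRepeat: Number(value),
    };
  }
  return undefined;
}

console.log(readPerfRepeat({ PRETTIER_PERF_REPEAT: "1000" }));
console.log(readPerfRepeat({ PRETTIER_PERF_REPEAT: "10x" }));
```

The `^\d+$` anchors matter: they reject values such as `"10x"` or `"-5"` so a malformed variable silently falls through instead of producing `NaN` repeats.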

10 changes: 5 additions & 5 deletions src/cli/format.js
@@ -165,8 +165,8 @@ function format(context, input, opt) {
return { formatted: pp, filepath: opt.filepath || "(stdin)\n" };
}

/* istanbul ignore next */
if (context.argv.debugBenchmark) {
const { performanceTestFlag } = context;
if (performanceTestFlag?.debugBenchmark) {
let benchmark;
try {
// eslint-disable-next-line import/no-extraneous-dependencies
@@ -197,7 +197,7 @@ function format(context, input, opt) {
);
})
.run({ async: false });
} else if (context.argv.debugRepeat > 0) {
} else if (performanceTestFlag?.debugRepeat) {
const repeat = context.argv.debugRepeat;
context.logger.debug(
"'--debug-repeat' option found, running formatWithCursor " +
@@ -289,8 +289,9 @@ async function formatFiles(context) {
const ignorer = await createIgnorerFromContextOrDie(context);

let numberOfUnformattedFilesFound = 0;
const { performanceTestFlag } = context;

if (context.argv.check && !context.performanceTestFlag) {
if (context.argv.check && !performanceTestFlag) {
context.logger.log("Checking formatting...");
}

@@ -379,7 +380,6 @@ async function formatFiles(context) {
printedFilename.clear();
}

const { performanceTestFlag } = context;
if (performanceTestFlag) {
context.logger.log(
`'${performanceTestFlag.name}' option found, skipped print code or write files.`
