doc: add benchmark/README.md and fix guide
* Write a new benchmark/README.md describing the benchmark
  directory layout and common API.
* Fix the moved benchmarking guide accordingly, add tips about how
  to get the help text from the benchmarking tools.

PR-URL: #11237
Fixes: #11190
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Andreas Madsen <amwebdk@gmail.com>
joyeecheung authored and addaleax committed Feb 22, 2017
1 parent 22a6edd commit 5d12fd9
Showing 2 changed files with 278 additions and 22 deletions.
246 changes: 246 additions & 0 deletions benchmark/README.md
@@ -0,0 +1,246 @@
# Node.js Core Benchmarks

This folder contains code and data used to measure the performance
of different Node.js implementations and of different ways of
writing JavaScript run by the built-in JavaScript engine.

For a detailed guide on how to write and run benchmarks in this
directory, see [the guide on benchmarks](../doc/guides/writing-and-running-benchmarks.md).

## Table of Contents

* [Benchmark directories](#benchmark-directories)
* [Common API](#common-api)

## Benchmark Directories

<table>
<thead>
<tr>
<th>Directory</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td>arrays</td>
<td>
Benchmarks for various operations on array-like objects,
including <code>Array</code>, <code>Buffer</code>, and typed arrays.
</td>
</tr>
<tr>
<td>assert</td>
<td>
Benchmarks for the <code>assert</code> subsystem.
</td>
</tr>
<tr>
<td>buffers</td>
<td>
Benchmarks for the <code>buffer</code> subsystem.
</td>
</tr>
<tr>
<td>child_process</td>
<td>
Benchmarks for the <code>child_process</code> subsystem.
</td>
</tr>
<tr>
<td>crypto</td>
<td>
Benchmarks for the <code>crypto</code> subsystem.
</td>
</tr>
<tr>
<td>dgram</td>
<td>
Benchmarks for the <code>dgram</code> subsystem.
</td>
</tr>
<tr>
<td>domain</td>
<td>
Benchmarks for the <code>domain</code> subsystem.
</td>
</tr>
<tr>
<td>es</td>
<td>
Benchmarks for various new ECMAScript features and their
pre-ES2015 counterparts.
</td>
</tr>
<tr>
<td>events</td>
<td>
Benchmarks for the <code>events</code> subsystem.
</td>
</tr>
<tr>
<td>fixtures</td>
<td>
Benchmark fixtures used in various benchmarks throughout
the benchmark suite.
</td>
</tr>
<tr>
<td>fs</td>
<td>
Benchmarks for the <code>fs</code> subsystem.
</td>
</tr>
<tr>
<td>http</td>
<td>
Benchmarks for the <code>http</code> subsystem.
</td>
</tr>
<tr>
<td>misc</td>
<td>
Miscellaneous benchmarks and benchmarks for shared
internal modules.
</td>
</tr>
<tr>
<td>module</td>
<td>
Benchmarks for the <code>module</code> subsystem.
</td>
</tr>
<tr>
<td>net</td>
<td>
Benchmarks for the <code>net</code> subsystem.
</td>
</tr>
<tr>
<td>path</td>
<td>
Benchmarks for the <code>path</code> subsystem.
</td>
</tr>
<tr>
<td>process</td>
<td>
Benchmarks for the <code>process</code> subsystem.
</td>
</tr>
<tr>
<td>querystring</td>
<td>
Benchmarks for the <code>querystring</code> subsystem.
</td>
</tr>
<tr>
<td>streams</td>
<td>
Benchmarks for the <code>streams</code> subsystem.
</td>
</tr>
<tr>
<td>string_decoder</td>
<td>
Benchmarks for the <code>string_decoder</code> subsystem.
</td>
</tr>
<tr>
<td>timers</td>
<td>
Benchmarks for the <code>timers</code> subsystem, including
<code>setTimeout</code>, <code>setInterval</code>, etc.
</td>
</tr>
<tr>
<td>tls</td>
<td>
Benchmarks for the <code>tls</code> subsystem.
</td>
</tr>
<tr>
<td>url</td>
<td>
Benchmarks for the <code>url</code> subsystem, including the legacy
<code>url</code> implementation and the WHATWG URL implementation.
</td>
</tr>
<tr>
<td>util</td>
<td>
Benchmarks for the <code>util</code> subsystem.
</td>
</tr>
<tr>
<td>vm</td>
<td>
Benchmarks for the <code>vm</code> subsystem.
</td>
</tr>
</tbody>
</table>

### Other Top-level Files

The top-level files include common dependencies of the benchmarks
and the tools for launching benchmarks and visualizing their output.
The actual benchmark scripts should be placed in their corresponding
directories.

* `_benchmark_progress.js`: implements the progress bar displayed
  when running `compare.js`.
* `_cli.js`: parses the command-line arguments passed to `compare.js`,
  `run.js` and `scatter.js`.
* `_cli.R`: parses the command-line arguments passed to `compare.R`.
* `_http-benchmarkers.js`: selects and runs external tools for benchmarking
  the `http` subsystem.
* `common.js`: see [Common API](#common-api).
* `compare.js`: command-line tool for comparing performance between different
  Node.js binaries.
* `compare.R`: R script for statistically analyzing the output of
  `compare.js`.
* `run.js`: command-line tool for running individual benchmark suites.
* `scatter.js`: command-line tool for comparing the performance
  between different parameters in benchmark configurations,
  for example to analyze the time complexity.
* `scatter.R`: R script for visualizing the output of `scatter.js` with
  scatter plots.
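
For instance, a typical comparison workflow (the binary names `./node-old` and
`./node-new` are placeholders; each tool prints its help text when run without
arguments) might look like:

```console
$ node benchmark/run.js arrays
$ node benchmark/compare.js --old ./node-old --new ./node-new string_decoder > compare.csv
$ cat compare.csv | Rscript benchmark/compare.R
$ node benchmark/scatter.js benchmark/string_decoder/string-decoder.js > scatter.csv
$ cat scatter.csv | Rscript benchmark/scatter.R --xaxis chunk --category encoding
```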

## Common API

The `common.js` module is used by benchmarks for consistency across repeated
tasks. It has a number of helpful functions and properties to help with
writing benchmarks.

### createBenchmark(fn, configs[, options])

See [the guide on writing benchmarks](../doc/guides/writing-and-running-benchmarks.md#basics-of-a-benchmark).
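
As a minimal sketch (the configuration values and the work done in the timed
loop are made up for illustration):

```js
'use strict';
const common = require('../common.js');

// Each combination of the config values below is run as a separate job.
const bench = common.createBenchmark(main, {
  n: [1e5],
  size: [16, 128],
});

function main(conf) {
  const buf = Buffer.alloc(conf.size);
  bench.start();
  for (let i = 0; i < conf.n; i++)
    buf.toString('hex');
  bench.end(conf.n);
}
```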

### default\_http\_benchmarker

The default benchmarker used to run HTTP benchmarks.
See [the guide on writing HTTP benchmarks](../doc/guides/writing-and-running-benchmarks.md#creating-an-http-benchmark).


### PORT

The default port used to run HTTP benchmarks.
See [the guide on writing HTTP benchmarks](../doc/guides/writing-and-running-benchmarks.md#creating-an-http-benchmark).
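
A sketch of how `PORT`, the benchmarker, and `bench.http` typically fit
together (the server and configuration here are illustrative):

```js
'use strict';
const common = require('../common.js');
const http = require('http');

const bench = common.createBenchmark(main, {
  c: [50],  // concurrent connections; an arbitrary example value
});

function main(conf) {
  const server = http.createServer((req, res) => res.end('hello'));
  server.listen(common.PORT, () => {
    // Runs the selected external benchmarker (e.g. wrk or autocannon)
    // against the server and reports the measured rate.
    bench.http({
      path: '/',
      connections: conf.c,
    }, () => server.close());
  });
}
```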

### sendResult(data)

Used in special benchmarks that can't use `createBenchmark` and the object
it returns to accomplish what they need. This function reports timing
data to the parent process (usually created by running `compare.js`, `run.js` or
`scatter.js`).
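
A hypothetical sketch; the field names below are an assumption modeled on what
`createBenchmark` reports, so check `common.js` for the authoritative shape:

```js
'use strict';
const common = require('../common.js');

// Timing gathered by some custom means; all values here are invented.
common.sendResult({
  name: 'misc/my-special-benchmark.js',
  conf: { n: 1e6 },
  rate: 123456.7,  // operations per second
  time: 8.1,       // elapsed time in seconds
});
```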

### v8ForceOptimization(method[, ...args])

Force V8 to mark the `method` for optimization with the native function
`%OptimizeFunctionOnNextCall()` and return the optimization status
after that.

It can be used to prevent the benchmark from getting disrupted by the optimizer
kicking in halfway through. However, this could result in a less effective
optimization. In general, only use it if you know what it actually does.
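
A sketch of typical use, assuming it is called before the timed section (the
function and argument are made up):

```js
'use strict';
const common = require('../common.js');

function hotPath(n) {
  let sum = 0;
  for (let i = 0; i < n; i++)
    sum += i;
  return sum;
}

// Ask V8 to optimize hotPath up front so that the optimizer does not
// kick in halfway through the measured loop.
common.v8ForceOptimization(hotPath, 1024);
```
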
54 changes: 32 additions & 22 deletions doc/guides/writing-and-running-benchmarks.md
@@ -1,26 +1,34 @@
-# Node.js core benchmark
+# How to Write and Run Benchmarks in Node.js Core

This folder contains benchmarks to measure the performance of the Node.js APIs.

-## Table of Content
+## Table of Contents

* [Prerequisites](#prerequisites)
+  * [HTTP Benchmark Requirements](#http-benchmark-requirements)
+  * [Benchmark Analysis Requirements](#benchmark-analysis-requirements)
* [Running benchmarks](#running-benchmarks)
-  * [Running individual benchmarks](#running-individual-benchmarks)
-  * [Running all benchmarks](#running-all-benchmarks)
-  * [Comparing node versions](#comparing-node-versions)
-  * [Comparing parameters](#comparing-parameters)
+  * [Running individual benchmarks](#running-individual-benchmarks)
+  * [Running all benchmarks](#running-all-benchmarks)
+  * [Comparing Node.js versions](#comparing-nodejs-versions)
+  * [Comparing parameters](#comparing-parameters)
* [Creating a benchmark](#creating-a-benchmark)
+  * [Basics of a benchmark](#basics-of-a-benchmark)
+  * [Creating an HTTP benchmark](#creating-an-http-benchmark)

## Prerequisites

+Basic Unix tools are required for some benchmarks.
+[Git for Windows][git-for-windows] includes Git Bash and the necessary tools,
+which need to be included in the global Windows `PATH`.

+### HTTP Benchmark Requirements

Most of the HTTP benchmarks require a benchmarker to be installed; this can be
either [`wrk`][wrk] or [`autocannon`][autocannon].

-`Autocannon` is a Node script that can be installed using
-`npm install -g autocannon`. It will use the Node executable that is in the
+`Autocannon` is a Node.js script that can be installed using
+`npm install -g autocannon`. It will use the Node.js executable that is in the
path, hence if you want to compare two HTTP benchmark runs make sure that the
-Node version in the path is not altered.
+Node.js version in the path is not altered.

`wrk` may be available through your preferred package manager. If not, you can
easily build it [from source][wrk] via `make`.
@@ -34,9 +42,7 @@ benchmarker to be used by providing it as an argument, e. g.:

`node benchmark/http/simple.js benchmarker=autocannon`

-Basic Unix tools are required for some benchmarks.
-[Git for Windows][git-for-windows] includes Git Bash and the necessary tools,
-which need to be included in the global Windows `PATH`.
+### Benchmark Analysis Requirements

To analyze the results, `R` should be installed. Check your package manager or
download it from https://www.r-project.org/.
@@ -50,7 +56,6 @@ install.packages("ggplot2")
install.packages("plyr")
```

-### CRAN Mirror Issues
In the event you get a message that you need to select a CRAN mirror first,
you can specify a mirror by adding in the `repo` parameter.
@@ -108,7 +113,8 @@ buffers/buffer-tostring.js n=10000000 len=1024 arg=false: 3783071.1678948295
### Running all benchmarks

Similar to running individual benchmarks, a group of benchmarks can be executed
-by using the `run.js` tool. Again this does not provide the statistical
+by using the `run.js` tool. To see how to use this script,
+run `node benchmark/run.js`. Again this does not provide the statistical
information to make any conclusions.

```console
@@ -135,18 +141,19 @@ It is possible to execute more groups by adding extra process arguments.
$ node benchmark/run.js arrays buffers
```

-### Comparing node versions
+### Comparing Node.js versions

-To compare the effect of a new node version use the `compare.js` tool. This
+To compare the effect of a new Node.js version use the `compare.js` tool. This
will run each benchmark multiple times, making it possible to calculate
-statistics on the performance measures.
+statistics on the performance measures. To see how to use this script,
+run `node benchmark/compare.js`.

As an example of how to check for a possible performance improvement, the
[#5134](https://github.com/nodejs/node/pull/5134) pull request will be used.
This pull request _claims_ to improve the performance of the
`string_decoder` module.

-First build two versions of node, one from the master branch (here called
+First build two versions of Node.js, one from the master branch (here called
`./node-master`) and another with the pull request applied (here called
`./node-pr-5134`).

@@ -219,7 +226,8 @@ It can be useful to compare the performance for different parameters, for
example to analyze the time complexity.

To do this use the `scatter.js` tool; this will run a benchmark multiple times
-and generate a csv with the results.
+and generate a csv with the results. To see how to use this script,
+run `node benchmark/scatter.js`.

```console
$ node benchmark/scatter.js benchmark/string_decoder/string-decoder.js > scatter.csv
@@ -286,6 +294,8 @@ chunk encoding mean confidence.interval

## Creating a benchmark

+### Basics of a benchmark

All benchmarks use the `require('../common.js')` module. This contains the
`createBenchmark(main, configs[, options])` method which will set up your
benchmark.
@@ -369,7 +379,7 @@ function main(conf) {
}
```

-## Creating HTTP benchmark
+### Creating an HTTP benchmark

The `bench` object returned by `createBenchmark` implements the
`http(options, callback)` method. It can be used to run an external tool to
benchmark HTTP servers.
