diff --git a/README.md b/README.md
index ab8a0dc6d62..8aa496a251f 100644
--- a/README.md
+++ b/README.md
@@ -77,7 +77,7 @@ on NPM.
 
 ## Benchmarks
 
-[Benchmark tool](https://tensorflow.github.io/tfjs/e2e/benchmarks/index.html). Use this webpage tool to test the performance related metrics (speed, memory, power, etc) of TensorFlow.js models on your local device with CPU, WebGL or WASM backend. You can benchmark custom models following this [guide](https://github.com/tensorflow/tfjs/e2e/benchmarks/README.md).
+[Benchmark tool](https://tensorflow.github.io/tfjs/e2e/benchmarks/local-benchmark/index.html). Use this webpage tool to test the performance related metrics (speed, memory, power, etc) of TensorFlow.js models on your local device with CPU, WebGL or WASM backend. You can benchmark custom models following this [guide](https://github.com/tensorflow/tfjs/blob/master/e2e/benchmarks/local-benchmark/README.md).
 
 ## Getting started
diff --git a/e2e/benchmarks/SpecRunner.html b/e2e/benchmarks/SpecRunner.html
index 1cf56321d64..8e1dc4fbfba 100644
--- a/e2e/benchmarks/SpecRunner.html
+++ b/e2e/benchmarks/SpecRunner.html
@@ -25,23 +25,18 @@
-
-
-
-
-
diff --git a/e2e/benchmarks/benchmark_util.js b/e2e/benchmarks/benchmark_util.js
index 65fd5ad5808..1b49c3e1901 100644
--- a/e2e/benchmarks/benchmark_util.js
+++ b/e2e/benchmarks/benchmark_util.js
@@ -15,6 +15,11 @@
  * =============================================================================
  */
 
+/**
+ * This tool depends on tf-core, tf-layers, tf-converter and the backends
+ * (tf-backend-cpu, tf-backend-webgl or tf-backend-wasm) that you would use.
+ */
+
 /**
  * Generates a random input for `model`, based on `model.inputs`. For
  * tf.GraphModel, `NamedTensorMap` input will be returned; otherwise,
diff --git a/e2e/benchmarks/benchmark_util_test.js b/e2e/benchmarks/benchmark_util_test.js
index 8e42d377a67..76af0f4b488 100644
--- a/e2e/benchmarks/benchmark_util_test.js
+++ b/e2e/benchmarks/benchmark_util_test.js
@@ -16,7 +16,7 @@
  */
 
 /**
- * The unit tests in this file can be run by opening `SpecRunner.html` in
+ * The unit tests in this file can be run by opening `./SpecRunner.html` in
  * browser.
  */
 
diff --git a/e2e/benchmarks/browserstack-benchmark/app_test.js b/e2e/benchmarks/browserstack-benchmark/app_test.js
index 258b6468646..6ffa6dc5328 100644
--- a/e2e/benchmarks/browserstack-benchmark/app_test.js
+++ b/e2e/benchmarks/browserstack-benchmark/app_test.js
@@ -15,6 +15,11 @@
  * =============================================================================
  */
 
+/**
+ * The unit tests in this file can be run by opening `./SpecRunner.html` in
+ * browser.
+ */
+
 describe('benchmark multiple browsers', () => {
   const browsersList = [
     {
diff --git a/e2e/benchmarks/browserstack-benchmark/index_test.js b/e2e/benchmarks/browserstack-benchmark/index_test.js
index 0bbe6f3f045..793e730c766 100644
--- a/e2e/benchmarks/browserstack-benchmark/index_test.js
+++ b/e2e/benchmarks/browserstack-benchmark/index_test.js
@@ -16,7 +16,7 @@
  */
 
 /**
- * The unit tests in this file can be run by opening `SpecRunner.html` in
+ * The unit tests in this file can be run by opening `./SpecRunner.html` in
  * browser.
  */
 
diff --git a/e2e/benchmarks/README.md b/e2e/benchmarks/local-benchmark/README.md
similarity index 88%
rename from e2e/benchmarks/README.md
rename to e2e/benchmarks/local-benchmark/README.md
index e57ffeae5ed..d5df2f5bdd5 100644
--- a/e2e/benchmarks/README.md
+++ b/e2e/benchmarks/local-benchmark/README.md
@@ -1,14 +1,14 @@
 # Benchmark custom models
 
-The `custom model` in the [benchmark tool](https://tensorflow.github.io/tfjs/e2e/benchmarks/index.html) currently only supports `tf.GraphModel` or `tf.LayersModel`.
+The `custom model` in the [benchmark tool](https://tensorflow.github.io/tfjs/e2e/benchmarks/local-benchmark/index.html) currently only supports `tf.GraphModel` or `tf.LayersModel`.
 
 If you want to benchmark models in other types or customize the inputs for model inference, you need to implement `load` and `predictFunc` methods, following this [example PR](https://github.com/tensorflow/tfjs/pull/3168/files).
 
 ## Models in local file system
 If you have a model in local file system, you can follow the steps below:
 1. Download [tfjs repository](https://github.com/tensorflow/tfjs.git).
-2. Put your `model.json` file and the weight files under the `tfjs/e2e/benchmarks/` folder.
-3. Under the `tfjs/e2e/benchmarks/` folder, run `npx http-server`.
+2. Put your `model.json` file and the weight files under the `tfjs/e2e/benchmarks/local-benchmark/` folder.
+3. Under the `tfjs/e2e/benchmarks/local-benchmark/` folder, run `npx http-server`.
 4. Open the browser go to `http://127.0.0.1:8080/`, this will open the benchmark tool. Then populate the `model.json` url to `modelUrl` under the custom model, which is `http://127.0.0.1:8080/model.json`.
 
 In addition, if the online tool is blocked by `CORS` problems when fetching the custom model, you can locally serve the model by the above steps to solve this problem.
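The renamed README says that benchmarking other model types requires implementing `load` and `predictFunc`. A minimal standalone sketch of what such a pair could look like, using a stand-in model object instead of a real `tf.GraphModel` (all names here are illustrative assumptions, not the benchmark tool's actual interface; see the example PR referenced above for the real one):

```javascript
// Hypothetical sketch of a custom-model entry for the benchmark tool.
// The stand-in model below just doubles its input; a real implementation
// would load a model (e.g. via tf.loadGraphModel) and run model.predict.
const customModelBenchmark = {
  // `load` resolves to the object that the function returned by
  // `predictFunc` will receive on every timed run.
  load: async () => ({
    run: (xs) => xs.map((v) => v * 2),  // stand-in for real inference
  }),
  // `predictFunc` returns the function that the tool times.
  predictFunc: () => (model, input) => model.run(input),
};

// Usage sketch:
customModelBenchmark.load().then((model) => {
  const predict = customModelBenchmark.predictFunc();
  console.log(predict(model, [1, 2, 3]));  // → [ 2, 4, 6 ]
});
```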
diff --git a/e2e/benchmarks/local-benchmark/SpecRunner.html b/e2e/benchmarks/local-benchmark/SpecRunner.html
new file mode 100644
index 00000000000..4a57886f99c
--- /dev/null
+++ b/e2e/benchmarks/local-benchmark/SpecRunner.html
@@ -0,0 +1,48 @@
+
+
+
+
+
+
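The new header comment added to `benchmark_util.js` sits above a helper that "generates a random input for `model`, based on `model.inputs`". A dependency-free sketch of that idea follows; the input-spec format (`name`, `shape` with `-1` for dynamic dimensions, `dtype`) is an assumption for illustration, and the real helper produces TensorFlow.js tensors rather than plain arrays:

```javascript
// Minimal sketch, not the tool's implementation: build a NamedTensorMap-like
// object of random data, one entry per input spec.
function generateRandomInput(inputSpecs) {
  const input = {};
  for (const spec of inputSpecs) {
    // Replace dynamic (-1) dimensions with 1 so the size is concrete.
    const shape = spec.shape.map((d) => (d === -1 ? 1 : d));
    const size = shape.reduce((a, b) => a * b, 1);
    const data = new Array(size);
    for (let i = 0; i < size; i++) {
      data[i] =
          spec.dtype === 'int32' ? Math.floor(Math.random() * 100) : Math.random();
    }
    input[spec.name] = {shape, dtype: spec.dtype, data};
  }
  return input;
}

// Usage sketch with a made-up spec:
const input = generateRandomInput([{name: 'x', shape: [-1, 3], dtype: 'float32'}]);
console.log(input.x.shape);  // → [ 1, 3 ]
```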