prototype benchmark command #849

Closed
wants to merge 3 commits

3 participants

@caridy
Owner

THIS IS AN EXPERIMENTAL COMMAND

This command enables us to write benchmark tests at the app level. It uses benchmark and benchtable in conjunction with YUI to facilitate such tests. Also, any mojito or app-level yui module is available for benchmarking without any extra configuration.

Steps:

  • npm i benchmark -g
  • npm i benchtable -g
  • mojito create app demo
  • cd demo
  • mkdir benchmarks
  • touch benchmarks/foo-benchmark.js
  • write your benchmark code in the new file
  • mojito benchmark ./benchmarks/foo-benchmark.js

Here is an example of a benchmark file:

YUI.add('strings-concat-benchmark', function (Y) {

    var suite = Y.BenchmarkSuite;

    suite.add('concat', function () {
        var b = 'foo' + 'bar';
    });

    suite.add('array join', function () {
        var b = ['foo', 'bar'].join('');
    });

}, '0.1', {requires: ['mojito-benchmark']});

To execute the test, you can do this:

$ mojito benchmark benchmarks/strings-concat-benchmark.js 
Benchmarking YUI module strings-concat-benchmark [./demo/benchmarks/strings-concat-benchmark.js]
Starting benchmarks.
concat x 168,013,961 ops/sec ±44.33% (28 runs sampled)
array join x 20,195,411 ops/sec ±2.09% (94 runs sampled)
⚠ Fastest is concat

Here is another example of the output after executing the YUI Base benchmark (from YUI's source) to compare different Base implementations:

$ mojito benchmark benchmarks/base-benchmark.js 
Benchmarking YUI module base-benchmark [./demo/benchmarks/base-benchmark.js]
Starting benchmarks.
Base x 6,573 ops/sec ±4.01% (86 runs sampled)
MyBase x 5,893 ops/sec ±3.35% (89 runs sampled)
MyBase with 10 simple value attributes x 4,793 ops/sec ±2.09% (93 runs sampled)
MyBase with 20 varied attributes x 2,792 ops/sec ±1.84% (93 runs sampled)
BaseCore x 31,725 ops/sec ±3.09% (89 runs sampled)
MyBaseCore x 24,552 ops/sec ±6.80% (76 runs sampled)
MyBaseCore with 10 simple value attributes x 8,977 ops/sec ±3.97% (84 runs sampled)
MyBaseCore with 20 varied attributes x 4,606 ops/sec ±3.06% (90 runs sampled)
⚠ Fastest is BaseCore

When it comes to benchtable, you can use different datasets to benchmark a piece of functionality with different data. Here is the output of a benchmark table testing string concatenation with short and long strings (a sketch of the corresponding benchtable file follows the table below):

$ mojito benchmark benchmarks/strings-concat-benchtable.js 
Benchmarking YUI module strings-concat-benchtable [./demo/benchmarks/strings-concat-benchtable.js]
Starting benchmarks.
concat for params small strings (3 characters) x 12,273,020 ops/sec ±0.88% (88 runs sampled)
concat for params big strings (1025 characters) x 11,763,540 ops/sec ±1.10% (99 runs sampled)
array join for params small strings (3 characters) x 7,447,580 ops/sec ±2.63% (94 runs sampled)
array join for params big strings (1025 characters) x 958,773 ops/sec ±0.77% (100 runs sampled)
⚠ Fastest is concat for params small strings (3 characters)
+------------+------------------------------+-------------------------------+
|            | small strings (3 characters) | big strings (1025 characters) |
+------------+------------------------------+-------------------------------+
| concat     | 12,273,020 ops/sec           | 11,763,540 ops/sec            |
+------------+------------------------------+-------------------------------+
| array join | 7,447,580 ops/sec            | 958,773 ops/sec               |
+------------+------------------------------+-------------------------------+
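
The benchtable file itself is not shown above. Here is a minimal sketch of what it could look like, assuming benchtable's addFunction/addInput API (each addInput entry is an array of arguments passed to every function under test; the big-string values here are illustrative):

YUI.add('strings-concat-benchtable', function (Y) {

    var suite = Y.BenchtableSuite,
        small = ['foo', 'bar'],
        big = [new Array(1026).join('a'),
               new Array(1026).join('b')]; // 1025-character strings

    suite.addFunction('concat', function (a, b) {
        var s = a + b;
    });

    suite.addFunction('array join', function (a, b) {
        var s = [a, b].join('');
    });

    suite.addInput('small strings (3 characters)', small);
    suite.addInput('big strings (1025 characters)', big);

}, '0.1', {requires: ['mojito-benchmark']});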

As you can see, you can test multiple datasets with each implementation.

GIST with the examples:

TODO:

  • define dependencies benchmark@1.0.x and benchtable@0.0.x
  • evaluate if the store can identify benchmark files
  • evaluate moving this infrastructure into its own package called mojito-benchmark.
  • verify benchmark (bz #5988048) and benchtable (bz #5988047) as ynpm pkgs.
@mojit0

+1

@drewfish
Owner

A few notes:

  • It looks like the benchmark filename has to match the YUI module name. This should be explicitly noted in the documentation.

  • You should probably add an on('error') handler to show errors; otherwise they get swallowed and the user has no idea what went wrong with the benchmark.

  • I want to compare different versions of an operation. However, I don't want to do the setup in each iteration, and there's no preCycle event (or the equivalent). (For example, I want to compare different versions of store.expandInstanceForEnv(), but the only way to do that is to put them all in different benchmark files, since there's currently no preCycle to do things like store.expandInstanceForEnv = function () {...}.)

@caridy
Owner
  • error handling was addressed:
$ mojito benchmark benchmarks/strings-concat-benchmark.js 
Benchmarking YUI module strings-concat-benchmark [./demo/benchmarks/strings-concat-benchmark.js]
Starting benchmarks.
concat x 142,256,065 ops/sec ±44.61% (24 runs sampled)
✖ array join: ReferenceError: g is not defined
⚠ Fastest is concat
  • Usage doc was expanded to note 3 important things:
NOTES:
  * The name of the yui module that defines the benchmark test should
    match the filename. In the first example, the test should be defined
    as `YUI.add("foo-benchmark", function (Y) {/*...*/});`, otherwise
    the test will fail.
  * Any yui module from yui core, mojito core or application level module
    that runs on the server runtime could be required as part of the
    "requires" array in the benchmark test without any extra configuration.
  * If you want to require a yui module that is meant to run in the client
    runtime, make sure you specify the proper --context option.
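
For example, to require a client-runtime module you might run something like this (assuming the standard runtime:client context value; adjust to your app's dimensions):

$ mojito benchmark benchmarks/foo-client-benchmark.js --context runtime:client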

About comparing different implementations, the recommended way is to have store.expandInstanceForEnv and store.expandInstanceForEnvAlt in your code, so you can do the same as the original, plus some enhancements, and see how it goes. I recommend not mocking things or writing actual code in the tests directly if the intent is to test a feature that already exists.
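
For illustration, here is a minimal sketch of that pattern (slugify/slugifyAlt are hypothetical names; in practice the two variants would live in your app code and be pulled in via "requires"). Save it as benchmarks/slugify-benchmark.js so the module name matches the filename:

YUI.add('slugify-benchmark', function (Y) {

    // hypothetical pair: the existing implementation plus an alternative
    function slugify(s) {
        return s.toLowerCase().replace(/[^a-z0-9]+/g, '-');
    }

    function slugifyAlt(s) {
        return s.toLowerCase().split(/[^a-z0-9]+/).join('-');
    }

    var suite = Y.BenchmarkSuite;

    suite.add('slugify', function () {
        slugify('Hello World, Mojito!');
    });

    suite.add('slugifyAlt', function () {
        slugifyAlt('Hello World, Mojito!');
    });

}, '0.1', {requires: ['mojito-benchmark']});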

/cc @drewfish

@drewfish
Owner

+1

@caridy
Owner

We will put this on hold in favor of the "mojito-benchmark" npm package.

@caridy
Owner

@isao whenever you get a chance, we should revisit this now that we have mojito-cli

@caridy
Owner

very old PR, closing.

@caridy caridy closed this
Showing with 209 additions and 0 deletions.
  1. +209 −0 lib/app/commands/benchmark.js
209 lib/app/commands/benchmark.js
@@ -0,0 +1,209 @@
+/*jslint node:true, nomen:true, stupid: true */
+'use strict';
+
+var libpath = require('path'),
+    libfs = require('fs'),
+    existsSync = libfs.existsSync || libpath.existsSync,
+
+    BASE = libpath.resolve(__dirname, '../../../') + '/',
+    CWD = process.cwd(),
+
+    YUIFactory = require(BASE + 'lib/yui-sandbox'),
+    Store = require(BASE + 'lib/store'),
+    util = require(BASE + 'lib/management/utils'),
+
+    Benchmark = require('benchmark').Benchmark,
+    Benchtable = require('benchtable');
+
+function getYUIInstance(store, testName) {
+
+    var YUI,
+        Y,
+        mojits = store.yui.getConfigAllMojits('server', {}),
+        shared = store.yui.getConfigShared('server', {}, false),
+        modules;
+
+    YUI = YUIFactory.getYUI();
+
+    /*
+     * this synthetic module defines two ways to do
+     * benchmarking, by using the row implementation,
+     * or the table implementation which allows us to
+     * do more advanced things with multiple datasets.
+     * Only one of them should be used by a benchmark
+     * test.
+     */
+    YUI.add('mojito-benchmark', function (Y, NAME) {
+
+        var suite,
+            suiteTable;
+
+        // enabling benchmark suite
+
+        suite = Y.BenchmarkSuite = Benchmark.Suite(testName);
+
+        suite.on('start', function () {
+            util.log('Starting benchmarks.');
+        });
+
+        suite.on('cycle', function (event) {
+            if (!event.target.error) {
+                util.log(String(event.target));
+            }
+        });
+
+        suite.on('error', function (event) {
+            util.error(String(event.target) + String(event.target.error));
+        });
+
+        suite.on('complete', function (event) {
+            util.warn('Fastest is ' + this.filter('fastest').pluck('name'));
+        });
+
+        // enabling benchtable suite
+
+        suiteTable = Y.BenchtableSuite = new Benchtable(testName);
+
+        suiteTable.on('start', function () {
+            util.log('Starting benchmarks.');
+        });
+
+        suiteTable.on('cycle', function (event) {
+            if (!event.target.error) {
+                util.log(String(event.target));
+            }
+        });
+
+        suiteTable.on('error', function (event) {
+            util.error(String(event.target) + String(event.target.error));
+        });
+
+        suiteTable.on('complete', function (event) {
+            util.warn('Fastest is ' + this.filter('fastest').pluck('name'));
+            util.log(this.table.toString());
+        });
+
+    });
+
+    Y = YUI({
+        useSync: true
+    });
+
+    modules = Y.merge((mojits.modules || {}), (shared.modules || {}));
+
+    Y.applyConfig({
+        modules: modules
+    });
+
+    return Y;
+
+}
+
+/**
+ * Standard run method hook export.
+ * @method run
+ * @param {Array} args Trailing cli arguments passed to cli.js
+ * @param {Object} opts Parsed cli options like --context (see exports.options)
+ * @param {Function} cb callback to cli.js, takes string parameter for errors
+ */
+function run(args, opts, cb) {
+
+    var csvctx = util.contextCsvToObject, // shortcut
+        store,
+        conf = {
+            modules: {}
+        },
+        file = args[0] && libpath.resolve(CWD, args[0]),
+        name = file && libpath.basename(file, '.js'),
+        Y;
+
+    function die(err) {
+        cb(err, exports.usage, true);
+    }
+
+    if (!util.isMojitoApp(CWD)) {
+        die('Not a Mojito directory');
+    }
+
+    if (!file || !existsSync(file)) {
+        die('Invalid argument with the path to the benchmark script: ' + file);
+    }
+
+    // hash a cli context string like 'device:iphone,environment:test'
+    opts.context = typeof opts.context === 'string' ? csvctx(opts.context) : {};
+
+    // init resource store
+    store = Store.createStore({
+        root: CWD,
+        context: opts.context
+    });
+
+    // normalize inputs
+    Y = getYUIInstance(store, name);
+
+    util.log('Benchmarking YUI module ' + name + ' [' + file + ']');
+
+    conf.modules[name] = {
+        fullpath: file
+    };
+
+    Y.applyConfig(conf);
+    Y.use(name);
+
+    if (Y.BenchmarkSuite) {
+        try {
+            Y.BenchmarkSuite.run();
+            Y.BenchtableSuite.run();
+        } catch (e) {
+            die('Internal error while executing the benchmark module: ' +
+                name + '\n' + e);
+        }
+    } else {
+        die('Invalid benchmark module: ' + name);
+    }
+
+}
+
+/**
+ * Standard usage string export.
+ */
+exports.usage = [
+    'mojito benchmark {file} [options]',
+    '',
+    'Example: mojito benchmark ./benchmark/foo-benchmark.js',
+    "         (execute a global benchmark test)",
+    '',
+    'Example: mojito benchmark ./mojit/bar/tests/benchmark/bar-benchmark.js',
+    '         (execute a mojit benchmark test)',
+    '',
+    'Example: mojito benchmark baz.js --context environment:development',
+    '         (execute a custom benchmark test with a custom context)',
+    '',
+    'NOTES:',
+    '  * The name of the yui module that defines the benchmark test should',
+    '    match the filename. In the first example, the test should be defined',
+    '    as `YUI.add("foo-benchmark", function (Y) {/*...*/});`, otherwise',
+    '    the test will fail.',
+    '  * Any yui module from yui core, mojito core or application level module',
+    '    that runs on the server runtime could be required as part of the',
+    '    "requires" array in the benchmark test without any extra configuration.',
+    '  * If you want to require a yui module that is meant to run in the client',
+    '    runtime, make sure you specify the proper --context option.',
+    '',
+    'OPTIONS: ',
+    '  --context  [string]  A comma-separated list of key:value pairs',
+    '                       that define the base context used to read',
+    '                       configuration files'].join("\n");
+
+/**
+ * Standard options list export.
+ */
+exports.options = [
+    {
+        longName: 'context',
+        shortName: null,
+        hasValue: true
+    }
+];
+
+exports.run = run;