
benchmark: support for multiple http benchmarkers #8140

Closed

Conversation

bzoz
Contributor

@bzoz bzoz commented Aug 17, 2016

Checklist
  • make -j4 test (UNIX), or vcbuild test nosign (Windows) passes
  • documentation is changed or added
  • commit message follows commit guidelines
Affected core subsystem(s)

benchmark

Description of change

Continued from #7180

Add support for multiple HTTP benchmarkers. Adds autocannon as the secondary benchmarker.

This allows for all http benchmarks to be executed under Windows. All available tools (wrk and autocannon) will be used to run HTTP benchmarks. It will fail if none of the tools is installed.

cc @nodejs/benchmarking

This adds support for multiple HTTP benchmarkers. Adds autocannon
as the secondary benchmarker.
@bzoz bzoz added windows Issues and PRs related to the Windows platform. benchmark Issues and PRs related to the benchmark subsystem. labels Aug 17, 2016
@nodejs-github-bot nodejs-github-bot added the build Issues and PRs related to build files or the CI. label Aug 17, 2016
    return false;
  else
    return true;
};
Member

Perhaps simplify this to just:

  return !(result.error && result.error.code === 'ENOENT');

@jasnell
Member

jasnell commented Aug 17, 2016

Good stuff. Left a few comments. @mscdex @nodejs/benchmarking

* When running the benchmarks, set the `NODE_HTTP_BENCHMARKER` environment
variable to the desired benchmarker.
* To select the default benchmarker for a particular benchmark, specify it as
the `benchmarker` key (e.g. `benchmarker: 'wrk'`) in the configuration passed to
Member

I don't think this is a good idea. The options to createBenchmark should just be the benchmark parameters. If you want this feature, I think it would be much better to add an option object to bench.http.

var bench = common.createBenchmark(main, {
  num: [1, 4, 8, 16],
  size: [1, 64, 256],
  c: [100],
  benchmarker: ['wrk']
});

function main(conf) {
  bench.http({
    url: '/', 
    duration: 10,
    connections: conf.c,
    benchmarker: conf.benchmarker
  }, function () { ... });
}

@bzoz
Contributor Author

bzoz commented Aug 18, 2016

Updated, PTAL.

I've moved the benchmarkers code to http-benchmarkers.js.

@bzoz
Contributor Author

bzoz commented Aug 23, 2016

Updated the PR again, @AndreasMadsen PTAL

Benchmark options are now passed as an object. For each benchmarker, node will be restarted. By default only one benchmarker will be used.

process.exit(1);
self.report(result, elapsed);
if (cb) {
  cb(0);
Member

This is a little odd. I think you should make the .run signature

http_benchmarkers.run(options, function(error, results) { ... })

that way you can check the error and call the callback appropriately. Actually, I don't care so much whether the behaviour stays the same. It is just that we should avoid calling process.exit() from more than one file, as that makes the program difficult to reason about. This way the process.exit() logic can live in common.js.
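The error-first signature suggested above could be sketched as follows; `spawnTool` is a stand-in for whatever actually launches wrk/autocannon, an assumption for illustration:

```javascript
'use strict';

// Sketch of the suggested error-first signature: the benchmarker module
// reports failure through the callback instead of exiting the process.
function run(options, callback) {
  options.spawnTool(options, function(error, result, elapsed) {
    if (error) return callback(error);
    callback(null, result, elapsed);
  });
}

// In common.js, the one place that may terminate the process:
function reportOrExit(error, result, elapsed) {
  if (error) {
    console.error(error.message);
    process.exit(1); // process.exit() logic kept in a single file
  }
  console.log(result, elapsed);
}
```

With this shape, every failure path flows through the callback, and only common.js decides whether a failure is fatal.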

@AndreasMadsen
Member

@bzoz Great. I think this is much better.

new WrkBenchmarker() ];

var default_http_benchmarker;
var supported_http_benchmarkers = [];
Member

Use const

@bzoz
Contributor Author

bzoz commented Aug 23, 2016

@AndreasMadsen updated with your suggestions

this.name = 'autocannon';
}

AutocannonBenchmarker.prototype.autocannon_exe = process.platform === 'win32'
Member

I don't think strings should be put on the prototype. I would just evaluate it in the constructor.
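Moving the platform-specific string into the constructor, as suggested, might look like this sketch (the `.cmd` shim name for Windows is an assumption about the npm binary layout, not necessarily the PR's exact value):

```javascript
'use strict';

// Sketch: per-instance executable name assigned in the constructor,
// instead of a shared string on AutocannonBenchmarker.prototype.
function AutocannonBenchmarker() {
  this.name = 'autocannon';
  // On Windows the npm-installed binary is typically a .cmd shim
  // (assumption for illustration).
  this.executable =
    process.platform === 'win32' ? 'autocannon.cmd' : 'autocannon';
}

const b = new AutocannonBenchmarker();
```

Evaluating the expression once per instance in the constructor keeps all of the object's state in one place and avoids surprising shared data on the prototype.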

@bzoz
Contributor Author

bzoz commented Aug 24, 2016

Updated, PTAL

As for ENV: the --set thing will not work. If the overridden key is not in the configuration already, there will be a TypeError in common.js here. Not all http benchmarks use a benchmarking tool (e.g. bench-parser.js), so it won't work there.

@AndreasMadsen
Member

If the overridden key is not in the configuration already, there will be a TypeError in common.js here.

That we could fix: either make it default to a string or simply skip the property. I think the latter would be best.

@bzoz
Contributor Author

bzoz commented Aug 26, 2016

I think it would be better to just add this as a string. Otherwise I think it would be confusing for users: it would seem that sometimes it does not work. Also, there would be no feedback when one misspells a config option.

Anyhow - I'll change common.js so that it will assume a string type, and drop the ENV thing.
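The "assume a string type" behaviour for --set keys absent from the benchmark's configuration could be sketched like this; `applySettings` and the array-of-values convention are assumptions modelled on createBenchmark's option format, not the PR's actual code:

```javascript
'use strict';

// Sketch: merge --set overrides into a benchmark configuration where each
// declared option is an array of candidate values. A key the benchmark
// does not declare is accepted as a plain string instead of throwing.
function applySettings(configuration, settings) {
  const result = Object.assign({}, configuration);
  for (const key of Object.keys(settings)) {
    if (key in configuration && typeof configuration[key][0] === 'number') {
      result[key] = [Number(settings[key])]; // keep the declared numeric type
    } else {
      result[key] = [String(settings[key])]; // unknown keys default to string
    }
  }
  return result;
}
```

This way `--set benchmarker=wrk` works even for benchmarks that never declared a `benchmarker` option, at the cost of silently accepting misspelled keys.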

@bzoz
Contributor Author

bzoz commented Aug 26, 2016

Updated, PTAL

@bzoz
Contributor Author

bzoz commented Aug 26, 2016

@AndreasMadsen BTW, why the "dont-land-on-v*.x" labels?

'instructions.'));
return;
}
var benchmarker = benchmarkers[options.benchmarker];
Member

use const

@AndreasMadsen
Member

AndreasMadsen commented Aug 26, 2016

BTW, why the "dont-land-on-v*.x" labels?

Some major changes to the benchmark suite have been made; this PR depends on those changes and thus can't land on v6 or earlier. See #7890

@AndreasMadsen
Member

AndreasMadsen commented Aug 26, 2016

I don't want to hold this up any more, and I don't have much time to review code. So except for the minor const and let issues, it LGTM.

The current version is much better than the original; however, I still think there are some very implicit things that I don't like (but can tolerate). What is implicit:

  • adding benchmarker as a config option when bench.http is executed.
    • this is added so that the used benchmarker is shown in the output.
  • --set benchmarker= also has an effect when there is no explicit benchmarker option.
    • this is to support --set benchmarker when there is no explicit benchmarker option.
  • misspelled options are not shown in the output or options object, just in the extra_options object.
    • I know I suggested they should just be ignored; I was wrong.

I think all this can be solved by just enforcing explicitly setting the benchmarker parameter in the createBenchmark options object.

var bench = common.createBenchmark(main, {
  num: [1, 4, 8, 16],
  size: [1, 64, 256],
  c: [100],
  benchmarker: bench.default_http_benchmarker
});

function main(conf) {
  bench.http({
    url: '/', 
    duration: 10,
    connections: conf.c,
    benchmarker: conf.benchmarker
  }, function () { ... });
}

This way the benchmarker is added to the output and no special logic is needed for --set benchmarker.

Yes, if --set benchmarker is used, benchmarker= will also be added to the output of benchmarks that don't use it. I don't think this is a big issue. If it turns out to be one, we can filter the output to only show options passed explicitly to createBenchmark. But this is a tradeoff between readability and transparency.

@bzoz
Contributor Author

bzoz commented Aug 26, 2016

I would like to keep it the way it is now - without explicitly adding default_http_benchmarker and with proper output for other benchmarks. I've updated that var/let/const thingy.

Thanks for all the suggestions!

@AndreasMadsen
Member

/cc @jasnell @mscdex @mcollina, please review

@AndreasMadsen
Member

Let's also cc @nodejs/collaborators as this is a fairly big addition and someone might have stronger opinions.

@mcollina
Member

LGTM

@bzoz
Contributor Author

bzoz commented Aug 29, 2016

Any more opinions?

If not, I would like to land this tomorrow.

@AndreasMadsen
Member

Maybe you shouldn't put me under Reviewed-By since I'm not 100% onboard. I don't know what the policy is.

@mcollina
Member

@AndreasMadsen can you please recap why you are not onboard? Just to understand for someone jumping into this later on.

@AndreasMadsen
Member

AndreasMadsen commented Aug 29, 2016

@mcollina I've covered that in #8140 (comment). tl;dr: there are a few implicit things which I think are confusing/surprising, and they could be made explicit with minimal effort.

@mcollina
Member

Can we get you to agree, or get another opinion?

@AndreasMadsen
Member

AndreasMadsen commented Aug 29, 2016

@mcollina I do these reviews in my spare time. I'm sure we could agree if given enough time, but my time is quite limited these days (exams, assignments, etc.).

edit: I hope you don't take it the wrong way. I can understand that someone suddenly deciding not to spend more spare time on someone else can be offensive. I had reasonable time two weeks ago, but very little these weeks. I'm sorry.

bzoz added a commit that referenced this pull request Aug 31, 2016
This adds support for multiple HTTP benchmarkers. Adds autocannon
as the secondary benchmarker.

PR-URL: #8140
Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
@bzoz
Contributor Author

bzoz commented Aug 31, 2016

CI: https://ci.nodejs.org/job/node-test-pull-request/3903/ (failures unrelated)

Landed in b1bbc68

@AndreasMadsen I did as you asked. Anyhow, thanks for all your input! It was very helpful in improving the quality of this PR.

@bzoz bzoz closed this Aug 31, 2016
@gibfahn gibfahn mentioned this pull request Jun 15, 2017
Labels
benchmark Issues and PRs related to the benchmark subsystem. build Issues and PRs related to build files or the CI. windows Issues and PRs related to the Windows platform.