
Benchmark runner refactoring and interleaved benchmark sampling #647

Merged · 11 commits · Jul 26, 2018

Conversation

pv (Collaborator) commented on Jun 2, 2018

Refactor benchmark runner by decoupling preparation/running/result printing steps from each other.

Also enable interleaved sampling runs: each benchmark run is split into multiple parts, and the parts are run in an interleaved order. This lets each benchmark sample over long-term variations in CPU load and other background factors.
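As a rough sketch (not asv's actual implementation), the interleaved ordering can be pictured as round-robin scheduling of per-benchmark sample chunks; `interleave_runs` is a hypothetical helper name:

```python
# Hypothetical sketch of interleaved ordering (not asv's real code):
# each benchmark's sampling is split into `parts` chunks, and the chunks
# are scheduled round-robin so every benchmark spans the whole wall-clock
# interval instead of running in one contiguous block.
def interleave_runs(benchmarks, parts):
    """Yield (benchmark, part_index) pairs in interleaved order."""
    for part in range(parts):
        for bench in benchmarks:
            yield bench, part

print(list(interleave_runs(["bench_a", "bench_b"], 3)))
# [('bench_a', 0), ('bench_b', 0), ('bench_a', 1),
#  ('bench_b', 1), ('bench_a', 2), ('bench_b', 2)]
```

Because consecutive chunks belong to different benchmarks, a transient CPU slowdown affects all benchmarks a little rather than one benchmark a lot.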

jbrockmendel (Contributor) commented:

> This enables each benchmark to sample over long-time variations in CPU and other background factors.

Is this related to the issue discussed in #595?

pv (Collaborator, Author) commented on Jul 7, 2018

@jbrockmendel: yes, although it does not interleave between different commits.
That could in principle be added, though (and it would be easier to do after the refactoring here).

pv mentioned this pull request on Jul 8, 2018
@pv pv changed the title WIP: benchmark runner refactoring and interleaved benchmark sampling Benchmark runner refactoring and interleaved benchmark sampling Jul 24, 2018
pv added 4 commits July 24, 2018 22:37
Fix some exceptions raised in the test benchmark suite, and fix the code
that checks the number/repeat functionality
Instead of doing benchmark planning/running/printing result in one go,
decouple these steps from each other.

Before running, construct a list of Job objects to run.  Then run them
one by one, printing the results.

Splitting benchmark sample collection into multiple processes for
interleaved runs is done in the planning step.

The interleaved benchmark runs (i.e. sampling split into multiple parts,
run in an interleaved order) ensure that all benchmarks can sample over
long-term variations in CPU load and other background environmental factors.
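The plan/run/report split described in this commit message can be sketched as follows; the `Job` class and function names here are illustrative, not asv's actual API:

```python
# Hypothetical sketch of decoupled planning/running/reporting
# (illustrative names only, not asv's real classes).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Job:
    benchmark: str
    part: int
    result: Optional[float] = None

def plan(benchmarks, parts):
    # Planning step: split each benchmark's sampling into `parts`
    # jobs, ordered so consecutive jobs hit different benchmarks.
    return [Job(b, p) for p in range(parts) for b in benchmarks]

def run(job):
    job.result = 0.0  # placeholder for the actual timing measurement
    return job

def report(job):
    print(f"{job.benchmark} [part {job.part}]: {job.result}")

# Run the planned jobs one by one, printing results as they complete.
for job in plan(["bench_a", "bench_b"], parts=2):
    report(run(job))
```

Keeping the job list as plain data makes it possible to reorder, split, or distribute the work without touching the running or reporting code.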
pv added 7 commits July 25, 2018 00:27
…when running

Add --attribute/-a option for providing override values for benchmark
attributes in Run/Dev/Continuous.  Useful e.g. with timing benchmarks.

Only attributes with base-type values can be overridden, since they are
passed to the benchmark process JSON-serialized.
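A minimal sketch of how such a `name=value` override might be parsed and validated, assuming the base-type constraint above; `parse_attribute_override` is a hypothetical helper, not asv's actual `--attribute` implementation:

```python
# Hypothetical parser for a 'name=value' attribute override.
# Only JSON base types (numbers, strings, booleans, null) are accepted,
# mirroring the constraint that overrides are passed JSON-serialized.
import json

def parse_attribute_override(spec):
    name, _, raw = spec.partition("=")
    try:
        value = json.loads(raw)
    except json.JSONDecodeError:
        value = raw  # treat non-JSON input as a plain string
    if not isinstance(value, (int, float, str, bool, type(None))):
        raise ValueError(f"attribute {name!r} must have a base-type value")
    return name, value

print(parse_attribute_override("repeat=10"))        # ('repeat', 10)
print(parse_attribute_override("warmup_time=0.5"))  # ('warmup_time', 0.5)
```

Lists and dicts parse as JSON but fail the base-type check, so a container-valued attribute cannot be overridden this way.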
A minimum of 5 samples per process lets people scale up simply by
increasing `processes`, without having to raise `repeat` at the
same time.
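The scaling rule above can be illustrated with a small sketch; the floor constant and `total_samples` helper are assumptions for illustration, with `repeat=0` standing in for "not set":

```python
# Hypothetical illustration of the per-process sample floor: raising
# `processes` alone increases the total sample count, because each
# process contributes at least MIN_SAMPLES_PER_PROCESS samples.
MIN_SAMPLES_PER_PROCESS = 5  # assumed floor from the commit message

def total_samples(processes, repeat=0):
    return processes * max(MIN_SAMPLES_PER_PROCESS, repeat)

print(total_samples(2))            # 10
print(total_samples(4))            # 20
print(total_samples(2, repeat=8))  # 16
```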
@pv pv merged commit b4a13c3 into airspeed-velocity:master Jul 26, 2018
@pv pv deleted the many-proc-refactor-2 branch August 19, 2018 20:07