
Random interleaving of benchmark repetitions - the sequel (fixes #1051) #1163

Merged: 1 commit merged into google:main on Jun 3, 2021

Conversation

LebedevRI
Collaborator

@LebedevRI LebedevRI commented Jun 1, 2021

Inspired by the original implementation by Hai Huang @haih-g
from #1105.

The original implementation had design deficiencies that
weren't really addressable without a redesign, so it was reverted.

In essence, the original implementation consisted of two separable parts:

  • reducing the amount of time each repetition is run for, and symmetrically increasing the repetition count
  • running the repetitions in random order

While it worked fine for the usual case, it broke down when the user specified a repetition
count (that request was completely ignored), or specified a per-repetition min time (the
repetition count was still adjusted, but the per-repetition time was not, leading to much
greater run times).

Here, as I originally suggested in the original review, I'm separating the features
and dealing with only a single one: running the repetitions in random order.
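
As an illustration of what "random order" means here, a minimal sketch (not the PR's actual code): flatten all (benchmark, repetition) pairs into one list and shuffle it, so each pair still executes exactly once but repetitions of different benchmarks interleave.

```cpp
// Minimal sketch of the idea, not the PR's actual implementation.
#include <algorithm>
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

std::vector<std::pair<size_t, size_t>> MakeInterleavedOrder(
    size_t num_benchmarks, size_t repetitions, std::mt19937& rng) {
  std::vector<std::pair<size_t, size_t>> order;
  order.reserve(num_benchmarks * repetitions);
  for (size_t b = 0; b < num_benchmarks; ++b)
    for (size_t r = 0; r < repetitions; ++r)
      order.emplace_back(b, r);  // each repetition appears exactly once
  std::shuffle(order.begin(), order.end(), rng);
  return order;  // run the (benchmark, repetition) pairs in this order
}
```

Because only the execution order changes, the requested repetition count and the per-repetition min time keep their original meaning, which is exactly what the earlier implementation got wrong.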

Now that the runs/repetitions are no longer in order, the tooling may wish to sort the output,
and indeed `compare.py` has been updated to do that: #1168.
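
Since the per-repetition results now arrive out of order, tooling has to group them back before comparing. A rough C++ illustration of the sorting idea that `compare.py` implements in Python (the struct and field names here are assumptions for illustration, not the library's actual JSON keys):

```cpp
// Stable-sort interleaved run results back into per-benchmark order.
#include <algorithm>
#include <string>
#include <tuple>
#include <vector>

struct RunResult {
  std::string benchmark_name;  // e.g. "BM_Foo/8"
  int repetition_index;        // 0-based repetition number
};

void SortRuns(std::vector<RunResult>& runs) {
  std::stable_sort(runs.begin(), runs.end(),
                   [](const RunResult& a, const RunResult& b) {
                     return std::tie(a.benchmark_name, a.repetition_index) <
                            std::tie(b.benchmark_name, b.repetition_index);
                   });
}
```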

@LebedevRI LebedevRI requested a review from dmah42 June 1, 2021 19:38
@google-cla google-cla bot added the cla: yes label Jun 1, 2021
@haihuang-ml
Contributor

@LebedevRI Thanks for doing this. You may also want to include the following test cases from test/benchmark_random_interleaving_gtest.cc:

TEST_F(BenchmarkTest, Match1) {...}
TEST_F(BenchmarkTest, Match1WithRepetition) {...}
TEST_F(BenchmarkTest, Match1WithRandomInterleaving) {...}
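
For readers unfamiliar with those tests, a heavily simplified standalone sketch of the property they guard (the actual fixture and assertions in test/benchmark_random_interleaving_gtest.cc differ): with random interleaving enabled, a benchmark asked for N repetitions must still run exactly N times.

```cpp
#include <benchmark/benchmark.h>
#include <cstddef>
#include <vector>

static void BM_Noop(benchmark::State& state) {
  for (auto _ : state) {
  }
}
BENCHMARK(BM_Noop)->Repetitions(3);

// Reporter that counts only the per-repetition (non-aggregate) runs.
class CountingReporter : public benchmark::ConsoleReporter {
 public:
  size_t num_runs = 0;
  void ReportRuns(const std::vector<Run>& runs) override {
    for (const Run& run : runs)
      if (run.run_type == Run::RT_Iteration) ++num_runs;
    ConsoleReporter::ReportRuns(runs);
  }
};

int main(int argc, char** argv) {
  // Run with --benchmark_enable_random_interleaving=true to exercise the feature.
  benchmark::Initialize(&argc, argv);
  CountingReporter reporter;
  benchmark::RunSpecifiedBenchmarks(&reporter);
  return reporter.num_runs == 3 ? 0 : 1;  // each repetition ran exactly once
}
```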

src/benchmark.cc (outdated review thread, resolved)
src/benchmark_runner.cc (outdated review thread, resolved)
  i.results = manager->results;
}

void BenchmarkRunner::do_one_repetition() {
  assert(has_repeats_remaining() && "Already done all repetitions?");
Member

CHECK?
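
Context for the suggestion: assert() is compiled out when NDEBUG is defined (typical release builds), while a CHECK-style macro keeps validating in every build mode. A generic sketch of the distinction, not the library's actual check.h:

```cpp
#include <cstdio>
#include <cstdlib>

// Unlike assert(), this fires even in NDEBUG/release builds.
#define CHECK(cond)                                      \
  do {                                                   \
    if (!(cond)) {                                       \
      std::fprintf(stderr, "Check failed: %s\n", #cond); \
      std::abort();                                      \
    }                                                    \
  } while (0)
```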

src/benchmark_runner.cc (outdated review thread, resolved)
src/benchmark_runner.cc (outdated review thread, resolved)
src/benchmark_runner.h (outdated review thread, resolved)
src/benchmark.cc (outdated review thread, resolved)
@LebedevRI
Collaborator Author

Thank you for the review!

@LebedevRI LebedevRI merged commit fbc3140 into google:main Jun 3, 2021
@LebedevRI LebedevRI deleted the interleaving-the-sequel branch June 3, 2021 18:16
vincenzopalazzo pushed a commit to vincenzopalazzo/benchmark that referenced this pull request Feb 8, 2022