Change timing internals #86
Conversation
"log.cc" "sleep.cc" "string_util.cc" "sysinfo.cc" | ||
"walltime.cc") | ||
set(SOURCE_FILES "benchmark.cc" "minimal_benchmark.cc" "commandlineflags.cc" | ||
"colorprint.cc" "log.cc" "sleep.cc" "string_util.cc" |
colorprint before commandlineflags
minimal_benchmark after log
Done.
  }

  void Initialize(int* argc, const char** argv) {
-   internal::ParseCommandLineFlags(argc, argv);
+   ParseCommandLineFlags(argc, argv);
put this back under internal please
ack.
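For context, keeping the flag parsing under the internal namespace amounts to something like the sketch below; the namespace layout is assumed from the diff above, not taken from the patch itself.

```cpp
namespace benchmark {
namespace internal {
// Assumed declaration; lives in the implementation, not the public API.
void ParseCommandLineFlags(int* argc, const char** argv);
}  // namespace internal

void Initialize(int* argc, const char** argv) {
  // Public entry point delegates to the internal helper.
  internal::ParseCommandLineFlags(argc, argv);
}
}  // namespace benchmark
```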
  // Returns true iff the benchmark should continue through another iteration.
- bool KeepRunning();
+ BENCHMARK_ALWAYS_INLINE
+ bool KeepRunning() {
Can you please try this same change but with the implementation kept private? I'll be amazed if the overhead of the function call matches the inlined version, and the benefit of moving it is a simpler API with fewer internals exposed.
I've done the legwork. Here are some of the things I noticed.
- There was no difference when building in debug mode.
- There was a difference when building in release mode: I saw a 5% change on some benchmarks in basic_test, though that may just be noise.
The one thing I like about moving KeepRunning() out of the header is that the overhead it introduces doesn't change when users compile their tests at different optimization levels.
However, I also worry about introducing any amount of noise into micro-benchmarks, so I like the idea of keeping it in the header and letting the compiler decide what is best.
I think we can still expose fewer internals by moving some of the State methods out of the header. I'll make that change and not move KeepRunning() for now, but I'll continue to look into this.
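For reference, a rough sketch of the counter-driven KeepRunning() under discussion; the member and helper names (total_iterations_, max_iterations_, StartTimer, StopTimer) are illustrative placeholders, not the actual implementation. The point is that the per-iteration cost reduces to an increment and a compare, with the clock touched only on the first and last calls.

```cpp
// Illustrative sketch only.
class State {
 public:
  BENCHMARK_ALWAYS_INLINE bool KeepRunning() {
    if (total_iterations_ == 0) StartTimer();      // first call: read the clock once
    if (total_iterations_++ < max_iterations_) return true;
    StopTimer();                                   // final call: read the clock once
    return false;
  }

 private:
  void StartTimer();   // assumed out-of-line helpers
  void StopTimer();
  size_t total_iterations_;  // reset to 0 before each run
  size_t max_iterations_;    // chosen by the runner for this repetition
};
```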
} // end namespace

struct ThreadStats {
this could be in the anonymous namespace, right? as could TimerManager?
Yep. Done.
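For illustration, moving those helpers into an unnamed namespace in the .cc file looks roughly like this; the members shown are placeholders, since the real structs are not part of this hunk.

```cpp
// benchmark.cc -- sketch only. An unnamed namespace gives these types internal
// linkage, so they never need to appear in the public headers.
#include <stdint.h>

namespace {

struct ThreadStats {
  // Per-thread counters accumulated while the benchmark runs (placeholders).
  int64_t bytes_processed;
  int64_t items_processed;
};

class TimerManager {
  // Coordinates starting/stopping the shared timer across threads (placeholder).
};

}  // end namespace
```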
  // expansion should be 14x.
  bool is_significant = (seconds / FLAGS_benchmark_min_time) > 0.1;
  multiplier = is_significant ? multiplier : std::min(10.0, multiplier);
  // TODO(ericwf) I don't think this branch is reachable.
it is.
min_time = 0 -> multiplier = 0.0 and is_significant = true (because XX/0 is > 0.1)
Should min_time=0 be allowed?
sure.. it means "i don't care how quickly this runs" :)
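The reachability argument rests on IEEE floating-point division by zero; the small standalone check below illustrates it. The 1.4 factor (taken from the "14x" comment in the hunk) and the concrete values are guesses for illustration only.

```cpp
#include <cassert>
#include <cmath>

int main() {
  const double seconds = 0.05;  // hypothetical measured time for one trial
  const double min_time = 0.0;  // --benchmark_min_time=0: "no minimum"

  // A multiplier derived from min_time collapses to 0.0 ...
  const double multiplier = min_time * 1.4 / seconds;
  assert(multiplier == 0.0);

  // ... while seconds / 0.0 is +infinity, so the significance test still passes.
  assert(std::isinf(seconds / min_time));
  assert((seconds / min_time) > 0.1);
  return 0;
}
```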
  int output_width =
      fprintf(stdout,
              "%s%-*s %10s %10s %10s\n",
              Prefix(),
i thought you were going to remove this?
Pushing the commit helps...
@@ -153,14 +154,18 @@ void Initialize(int* argc, const char** argv);

  // Otherwise, run all benchmarks specified by the --benchmark_filter flag,
  // and exit after running the benchmarks.
- void RunSpecifiedBenchmarks(const BenchmarkReporter* reporter = nullptr);
+ void RunSpecifiedBenchmarks(const BenchmarkReporter* reporter = NULL);
can you leave the nullptr stuff alone? or does it conflict with standard library benchmarking?
I tried to leave it alone inside the library source. However I would like to keep the headers compiling in C++03 so I can benchmark stuff in C++03.
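For what it's worth, a common way to keep a default argument like this usable from both C++03 and C++11 translation units is a small dialect check. This is only an illustration of the idea, not what the patch does (the patch simply switches to NULL), and BENCHMARK_NULLPTR is a hypothetical macro name.

```cpp
#include <cstddef>  // for NULL

#if defined(__cplusplus) && __cplusplus >= 201103L
#define BENCHMARK_NULLPTR nullptr
#else
#define BENCHMARK_NULLPTR NULL
#endif

class BenchmarkReporter;  // forward declaration from the header above
void RunSpecifiedBenchmarks(const BenchmarkReporter* reporter = BENCHMARK_NULLPTR);
```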
Looks good
Merged.
Finally upgrade the timing internals!
The problem with the current version is that checking the time on every iteration of the benchmark introduces a large amount of jitter and overhead into the timings.
This version fixes that problem by checking the time only at the beginning and end of every benchmark run.
It uses an iteration counter in state.KeepRunning() to track the number of iterations to run.
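From the user's side the loop is unchanged; only the measurement strategy differs. A minimal sketch follows: the BM_StringCreation example and include path follow the library's usual conventions but are not part of this patch.

```cpp
#include <string>
#include "benchmark/benchmark.h"

// The clock is read when the loop starts and when it finishes; every call to
// KeepRunning() in between only bumps the iteration counter, so per-iteration
// cost is (total elapsed time) / (iterations run).
static void BM_StringCreation(benchmark::State& state) {
  while (state.KeepRunning()) {
    std::string created("hello");
  }
}
BENCHMARK(BM_StringCreation);

BENCHMARK_MAIN();
```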