Explanation of the benchmark results #397
Comments
Because the benchmark is run a few times with different iteration counts to determine the correct count to use. Once it finds an iteration count that makes the benchmark run for a long enough period of time, it reports only that final run, not the trial runs before it.
Time: the average wall time per iteration.
CPU: the average CPU time per iteration. By default this clock is used when determining the number of iterations to run. When your process sleeps, it is no longer accumulating CPU time.
Iterations: the number of iterations the benchmark ran, I'm assuming. (See https://github.com/google/benchmark#controlling-number-of-iterations)
Perhaps somebody should add basic documentation about the output information.
I do some initialization before the loop.
I sometimes get the following results.
You see that the CPU time is higher in one case. How can this be?
1. No, it won't. If you want to do some reinitialization inside the loop,
you can use PauseTiming and ResumeTiming.
2. Because slowFuncReturn is slower?
Dominic Hamon | Google
*There are no bad ideas; only good ideas that go horribly wrong.*
…On Wed, Jun 7, 2017 at 6:09 AM, zack-snyder ***@***.***> wrote:
1.
I do some initialization before the loop.
Will this also be included in the measurement?
Something like this:
static void BM_somefunc(benchmark::State& state)
{
  auto foo = do_some_init();
  while (state.KeepRunning())
  {
    do_some_calculation(foo);
  }
}
1.
I sometimes get the following results:
---------------------------------------------------------
Benchmark               Time           CPU     Iterations
---------------------------------------------------------
BM_fastFuncReturn   16327 ns      16392 ns          44800
BM_slowFuncReturn   17499 ns      16881 ns          40727
You see that the CPU time is higher in one case. How can this be?
How can the CPU time be higher than the wall-clock time?
I believe there might be a few ways. The most obvious is if your test uses multiple threads.
No, there are no multiple threads involved.
I feel the basic documentation is incredibly sparse and not very detailed about how any of the functions and macros work, their methodology, or the reasons why they are doing what they're doing. Coming to it afresh, having not studied it before, I find it almost impenetrable. I would recommend a full review of the documentation as it stands.
Are you sure?
On Linux, see src/timers.cc, lines 130 and 174, for which clocks it seems to use.
Regarding documentation, it's a problem of knowledge. So I think it would be useful if someone could actually compile a list of topics.
https://pythonspeed.com/articles/blocking-cpu-or-io/ explains wall time vs. CPU time. Maybe it will help you.
Closing this out as a specific issue, but I remain open to anyone who wants to enhance the documentation further.
Is there an explanation of the standard benchmark table results?
What is this exactly? I didn't find a clear explanation of it in the documentation.
2)
I put a sleep(100) into the while loop along with some code.
For some unknown reason we got 0 ns in the CPU column,
although there was code that did some calculation.
How big is the total time across all iterations?
It printed up to 111 yet said Iterations: 100.
Why?