
benchmark.cc is using non portable high precision timer #259

Closed
ruipires opened this issue Sep 18, 2014 · 4 comments

Comments

ruipires commented Sep 18, 2014

Currently benchmark.cc calls clock_gettime with CLOCK_PROCESS_CPUTIME_ID as the first parameter.
This is not available on Mac OS X (or Windows, AFAIK), and results in a failure to compile.

I'm considering sending a pull request, but am not sure how exactly to fix this.

Using #ifdefs to handle different platforms is something I tend to avoid at all costs.
Since C++11 is an option here, std::chrono::high_resolution_clock could help, but it measures wall time, not CPU time.
std::clock could be a cross-platform way to implement this, but it seems to have lower resolution than clock_gettime with CLOCK_PROCESS_CPUTIME_ID.

craigbarnes (Contributor) commented Sep 20, 2014

Wouldn't clock() / CLOCKS_PER_SEC be good enough for this kind of benchmark? It's by far the most portable way to measure duration (it's in C89), and its quirks only apply when timing I/O, which is all done before the timer starts in benchmark.cc.

nostrademons (Contributor) commented Sep 21, 2014

I like the clock() / CLOCKS_PER_SEC approach. The reason I didn't use it originally was:

  1. I was worried about picking up variance through I/O, yielding to other processes, etc.
  2. I wanted higher resolution than millisecond.

But reading the spec for clock(), it measures processor time for this process only, and on modern systems it gives microsecond resolution. And considering that I can't seem to get numbers more accurate than about ±200ms on my laptop because of Turbo Boost, I think the desire for more accuracy may not be worth it, particularly at the cost of portability.

nostrademons (Contributor) commented Sep 22, 2014

Reproduced on Travis CI; the pull request fixed it.

ruipires (Author) commented Sep 22, 2014

👍
