RDTSC does not measure cycles. #1
Comments
Thank you for pointing this out. I will update the documentation to make this clear.
If there's interest, I can contribute a PR to add support for this on platforms that support the rdpru instruction.
An issue with rdpru is that it only yields the cycle counter of the core the code is running on, so the work being measured must be pinned to a single core for the measurement to make sense. I'm not sure whether that's something the crate should do itself (e.g. by pinning the current thread) or leave to the caller. Further, it appears impossible to feature-detect support for rdpru.
So maybe this should be a different crate entirely.
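For what it's worth, one way for a caller to satisfy the pinning requirement is to pin the measurement thread before timing anything. Here's a minimal Linux-only sketch that declares `sched_setaffinity(2)` directly so no external crate is needed; the function name `pin_to_cpu` and the error handling are illustrative, not part of any existing crate:

```rust
// Pin the calling thread to a single CPU on Linux, so per-core counters
// like the one rdpru reads stay meaningful for the whole measurement.
#[cfg(target_os = "linux")]
fn pin_to_cpu(cpu: usize) -> Result<(), i32> {
    // sched_setaffinity(2) from libc; a cpu_set_t is a 1024-bit mask on Linux.
    extern "C" {
        fn sched_setaffinity(pid: i32, cpusetsize: usize, mask: *const u64) -> i32;
    }
    let mut mask = [0u64; 16]; // 16 * 64 = 1024 bits
    mask[cpu / 64] |= 1u64 << (cpu % 64);
    // pid 0 means "the calling thread".
    let rc = unsafe { sched_setaffinity(0, std::mem::size_of_val(&mask), mask.as_ptr()) };
    if rc == 0 { Ok(()) } else { Err(rc) }
}

#[cfg(target_os = "linux")]
fn main() {
    pin_to_cpu(0).expect("failed to pin to CPU 0");
    // ... run the per-core measurement here ...
    println!("pinned to CPU 0");
}

#[cfg(not(target_os = "linux"))]
fn main() {}
```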
Yes, I agree that would be a very useful feature, but it's probably best to put it in another crate, especially if you're right that there's no way to feature-detect support for it. If you make a separate crate for it, let me know and I'll put a link in the readme.
As the subject says, RDTSC does not measure cycles. "Invariant TSC" became the standard after the Pentium 4, turning RDTSC into a high-resolution timer that is invariant with respect to core frequency and power states. But even as a high-resolution timer, an RDTSC tick is not a fixed unit of time (e.g. a nanosecond) across different computers.
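You can see this for yourself by calibrating the TSC tick rate against the OS clock and comparing the result across machines. A rough sketch, assuming an x86_64 target; `estimate_tsc_hz` is an illustrative name, and `_rdtsc` is the stable std intrinsic wrapping the RDTSC instruction:

```rust
use std::time::{Duration, Instant};

// Estimate how fast the TSC ticks by comparing a pair of RDTSC reads
// against the OS wall clock. Different machines will report different
// rates: an RDTSC tick is wall time at some machine-specific rate,
// not CPU cycles.
#[cfg(target_arch = "x86_64")]
fn estimate_tsc_hz() -> f64 {
    let wall_start = Instant::now();
    let tsc_start = unsafe { core::arch::x86_64::_rdtsc() };
    std::thread::sleep(Duration::from_millis(200));
    let tsc_end = unsafe { core::arch::x86_64::_rdtsc() };
    (tsc_end - tsc_start) as f64 / wall_start.elapsed().as_secs_f64()
}

#[cfg(target_arch = "x86_64")]
fn main() {
    println!("estimated TSC rate: {:.3e} ticks/sec", estimate_tsc_hz());
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```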
If you want to report benchmark timings in cycles, there are several issues. First, you have to determine the conversion rate from RDTSC ticks to clock cycles. This can sometimes be queried from the operating system (which usually measures it at boot with its own timing loop), or you can measure it yourself with a timing loop that relies on the known 1-cycle latency of basic ALU instructions. But whether you query it from the OS or measure it from scratch, you run into the problem that core frequency is highly variable: all measurements need to happen at a fixed core frequency, usually by forcing the processor to stay at its base frequency for both the timing loop and the benchmark itself. That requires admin privileges and is highly OS dependent.
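For illustration, a from-scratch calibration loop might look like the sketch below: it times a chain of dependent integer adds (which retire at roughly 1 per cycle on modern x86 cores) against RDTSC ticks. As stressed above, the result is only meaningful if the core is held at a fixed frequency for the whole run; all names here are illustrative:

```rust
// Estimate CPU cycles per RDTSC tick using a dependent-add timing loop.
// Only meaningful at a fixed core frequency (e.g. pinned to base clock).
#[cfg(target_arch = "x86_64")]
fn cycles_per_tick() -> f64 {
    use core::arch::x86_64::_rdtsc;
    const ITERS: u64 = 10_000_000;
    let mut acc: u64 = 0;
    let start = unsafe { _rdtsc() };
    for _ in 0..ITERS {
        // Each add depends on the previous one, so the chain executes at
        // ~1 add per cycle; black_box stops the compiler folding the loop.
        acc = std::hint::black_box(acc.wrapping_add(1));
    }
    let end = unsafe { _rdtsc() };
    // ITERS adds took ~ITERS cycles; divide by elapsed ticks to get the
    // cycles-per-tick ratio at the current core frequency.
    ITERS as f64 / (end - start) as f64
}

#[cfg(target_arch = "x86_64")]
fn main() {
    println!("estimated cycles per RDTSC tick: {:.3}", cycles_per_tick());
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```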
It's tricky to do everything correctly, but the operational requirements (requiring admin to force the CPU to base frequency, the inability to do this portably on Windows, etc) are probably the biggest obstacle for a turn-key solution like you're trying to provide here. But at a minimum I don't think it's good to advertise to your users that you're measuring cycles when you're just measuring RDTSC ticks.
If you're interested in what's involved in doing this properly on Linux, you can look at how uarch-bench does it.
Here's the launcher script: https://github.com/travisdowns/uarch-bench/blob/master/uarch-bench.sh
Here's the timing calibration loop: https://github.com/travisdowns/uarch-bench/blob/master/timers.cpp
My recommendation is that you don't try to measure clock cycles outside of carefully controlled settings like uarch-bench.