Conversation
On some Linux kernels, e.g. x86_64 kernels, clock_gettime() has poor resolution (1 ms on my 2.6.33 kernel). If clock_getres() returns a resolution poorer than what we expect from gettimeofday() (microseconds), then use gettimeofday() instead. Also fix a casting bug that would make to!("seconds", float) truncate unnecessarily.
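For reference, here is a minimal C sketch of the fallback logic described above. It is illustrative only: the actual change lives in druntime/std.datetime, the one-microsecond threshold is my reading of the description, and on older glibc you may need to link with -lrt.

```c
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    struct timespec res;

    /* If clock_getres() fails or reports a resolution coarser than one
     * microsecond, fall back to gettimeofday(), which gives microsecond
     * timestamps; otherwise use clock_gettime(CLOCK_MONOTONIC). */
    if (clock_getres(CLOCK_MONOTONIC, &res) != 0
        || res.tv_sec > 0 || res.tv_nsec > 1000) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        printf("gettimeofday: %ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);
    } else {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        printf("clock_gettime: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
    }
    return 0;
}
```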
Should work now if clock_gettime is not defined at all.
I think that either I grossly misunderstand clock_getres() or it is just wrong, because I am definitely getting better than 999,848-nanosecond resolution. When I print the output of clock_getres(CLOCK_MONOTONIC) I get {0, 999848}. But when I print the difference between two successive calls to clock_gettime(CLOCK_MONOTONIC) I get 698 nanoseconds. When I run the same experiment on Ubuntu's 2.6.38-generic kernel, also on x86_64, I get 1 nanosecond resolution from clock_getres() and the two successive calls to clock_gettime() give a difference of 217 nanoseconds. So it's not x86_64 per se, but something that was fixed since 2.6.33, or something about CentOS kernels, or ...?
I can rebase into a new pull request with a single commit if that's preferable.
On further thought, maybe ticksPerSec should unconditionally be nanoseconds when using clock_gettime(). This avoids unnecessary confusion and rounding errors during conversions. It doesn't seem like ticksPerSec is meant to be a definitive answer on clock resolution, just the time base for calculating conversions.
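To make the rounding concern concrete, here is a small illustrative C example (the to_ticks helper is mine, not std.datetime's; the 698 ns figure is the interval measured above): a sub-microsecond interval survives conversion through a nanosecond time base but rounds to zero through a microsecond one.

```c
#include <stdio.h>
#include <time.h>

/* Convert a timespec to ticks of a given time base (ticks per second).
 * Integer division discards anything finer than one tick. */
static long long to_ticks(struct timespec ts, long long ticks_per_sec)
{
    return ts.tv_sec * ticks_per_sec
         + ts.tv_nsec * ticks_per_sec / 1000000000LL;
}

int main(void)
{
    struct timespec interval = { 0, 698 };  /* the 698 ns interval measured above */

    printf("nanosecond base:  %lld ticks\n", to_ticks(interval, 1000000000LL)); /* 698 */
    printf("microsecond base: %lld ticks\n", to_ticks(interval, 1000000LL));    /* 0   */
    return 0;
}
```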
I'm not at all comfortable with setting […].

And even if […].

Changing std.datetime so that it uses […].

As for your comment about misunderstanding clock_getres(), […].

In any case, I think that you need to come up with examples of […].
Thanks for looking at this. I'm sorry for the confusing series of diffs. I came to understand what's going on much better last night. Let me explain.
To get back to the beginning, I noticed this problem because Here's my C benchmark: clock_getres(CLOCK_MONOTONIC, &ts1); clock_gettime(CLOCK_MONOTONIC, &ts1); I was seeing clock_getres() return {0, 999848} (about 1ms resolution), On 11/27/11 8:45 PM, Jonathan M Davis wrote:
There are plenty of cases where it is far more important to have a monotonic clock than to have higher precision, much as the higher precision may be desired. One common case would be when playing video. The clock used must be monotonic, or you're going to run into issues when the clock gets changed and/or the time drifts. Frames won't be played with the proper timing, and the video isn't going to play properly. In other cases, you care more about precision. You'd rather get the time back in microseconds than milliseconds, and the fact that you may occasionally get incorrect results due to the clock being non-monotonic isn't a big deal. In general though, there's no question IMHO that the clock used for timing needs to be monotonic. The problem, of course, is what to do when the monotonic clock has issues and the choice is either monotonic or high precision. The best choice in that situation depends entirely on the use case.

Now, looking at the situation further, it does look like setting […].

So, your currently suggested change is probably the correct way to go. I'm not enthused about […]
I just committed a slightly adjusted version of this fix. |
Awesome. |
Apparently, on some Linux systems, clock_getres reports the wrong resolution. dlang#88
On some Linux kernels, clock_getres() returns a bogus resolution (e.g. 999848 nanoseconds, when it should be 1 ns). Work around it.
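The committed change itself isn't shown here, but as a rough sketch of the kind of workaround being described (plain C rather than the actual druntime code): ignore the reported resolution when choosing the tick base and treat clock_gettime(CLOCK_MONOTONIC) readings as nanosecond ticks, since tv_nsec is filled in regardless of what clock_getres() claims.

```c
#include <time.h>

/* Nanosecond tick base for CLOCK_MONOTONIC.  clock_gettime() always
 * fills in tv_nsec, so this is safe even when clock_getres() reports
 * something implausible like {0, 999848}. */
#define MONOTONIC_TICKS_PER_SEC 1000000000LL

static long long monotonic_ticks(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * MONOTONIC_TICKS_PER_SEC + ts.tv_nsec;
}
```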