Add MonoTime as a replacement for TickDuration #847
Conversation
// but we avoid it here in order to avoid the rounding errors that it
// introduces.
return ticks / srcTicksPerSecond * dstTicksPerSecond +
       ticks % srcTicksPerSecond * dstTicksPerSecond / srcTicksPerSecond;
Where does this come from, and why is it better than `ticks * dstTicksPerSecond / srcTicksPerSecond`?
It's been a while since I wrote this, but I believe that the idea was that the values wouldn't overflow as quickly this way. It converts the seconds separately from the fractional seconds so that the resultant number isn't as large in the middle of the conversion. For instance, with the frequencies `10_000_000` and `1_000_000` and the value `long.max >> 20` for `ticks`, the conversion from `1_000_000` to `10_000_000` overflows and ends up with a negative value with your equation (and `long.max >> 19` overflows in both directions), whereas with my equation, it's actually able to avoid overflow with values as large as `long.max >> 1`.
And actually, it seems that my equation avoids overflow better than using a floating point conversion would (I had assumed that they would be comparable but that the floating point would have trouble with not being able to represent some numbers, making it less desirable). By having

`return cast(long)(cast(real)(ticks * dstTicksPerSecond) / srcTicksPerSecond);`

it seems to be identical to your version (i.e. the cast to `real` has no real effect), whereas

`return cast(long)(ticks / (cast(real)srcTicksPerSecond) * dstTicksPerSecond);`

is able to deal with much higher numbers, but it still chokes somewhat sooner than my equation (e.g. `long.max >> 4` is fine, but `long.max >> 3` is too large and overflows).
I actually tried to find somewhere online that discussed the best way to do this conversion, but I wasn't able to find anything (e.g. rounding would arguably improve the accuracy in some circumstances, but I don't know if that's really better or not). This equation was the best that I was able to come up with. And as far as I can tell, it's more accurate, because it's able to deal with larger numbers.
The rounding error is slightly larger than with `ticks * dstTicksPerSecond / srcTicksPerSecond`, because you're dividing the fraction of a second by the full `srcTicksPerSecond` in `ticks % srcTicksPerSecond * dstTicksPerSecond / srcTicksPerSecond`.
Should be fine, though. Maybe the new overflow-safe arithmetic might be useful.
Just seeing this pull now.
It occurs to me that the `_ticksPerSecond` value is immutable, decided upon program start. You could compute, at that same time, a specific value to multiply or divide by in order to do conversions: the value representing `dstTicksPerSecond / srcTicksPerSecond`, which you would multiply by, or its inverse, which you would divide by, depending on which is larger (this would also require a flag to record which case applies).

Edit: This assumes that ticks/sec for dst and src are always divisible (e.g. always powers of 10). Obviously, if they are not divisible, then this wouldn't work.
How about using a delayed conversion to `Duration`? See here for a sketch that uses `alias this` to implicitly convert a `MonoTime.Diff` to a `Duration`.
I don't know. I really question that it's worth it. Granted, … So, I guess that it comes down to the question of whether adding the diff back is something that's going to be done frequently enough to be worth the extra complication of adding a different type for the diff. Having it implicitly convert to …
OK, let's wait until someone reports an issue.
I agree with @jmdavis regarding the specialized duration. At this point, having had experience with std.datetime, and not knowing what types to use, I'd rather move towards a single duration type. In fact, I wouldn't be opposed at some point to having Duration be parameterized as to what unit it is, defaulting to hnsecs.

Ah, but then you can't accept a single … My take on it is that hnsecs is high enough precision for almost everything, and with …

Sure you can. The default would be hnsecs; that would be the type you accept. I'm wondering if it's possible, either now or in the future, to … I also think … In any case, not something that needs to happen now. I am not familiar with C++11's datetime code; I have used boost's, and it's sucky.

So can we go on with this pull?

I haven't had a chance to do a full review; I was just reading the comments. I will review and get back to you. I agree with the concept.
using with timers or other types of code which must do high precision
timing as opposed to dealing with the wall clock time. Wall clock time (and
thus $(XREF datetime, SysTime)) should never be used in situations where
having the clock go backwards would cause problems.
This description is awkward. A monotonic clock always moves forward, but the description seems to suggest that it moves only once per second. Perhaps:

A monotonic clock is one which always moves forward, regardless of any changes to the system's wall clock (accessed via $(XREF datetime, SysTime)). Because of its only-increasing nature, it is ideal for use with timers or high precision timing functions that would be adversely affected by adjustments typically made to a system clock (e.g. adjustments made by NTP or for daylight saving time). The resolution of the timer is system dependent and can be determined by a call to $(D ticksPerSecond).
I know this is not exactly the goal of this pull, but it would be awesome to have a function which is the most effective and least error-prone mechanism to get the difference between wall clock time and monotonic time.

I'm afraid that I don't understand what such a function would do. Monotonic time and wall clock time are fundamentally different. What "difference" would you be looking for? The only thing that I can think of is if you recorded both the monotonic time and the wall clock time and then checked them later, you could see how much the wall clock time had drifted, but that's pretty much just a matter of recording them, diffing them with the new values later, and comparing the resulting `Duration`s.

Where I have experienced this need is in a logging mechanism. You want the log to display "normal" dates/times, but you don't want to use the wall clock to track the actual timing (to avoid jumps back in time inside the log file). So what you do is calculate an "offset" between wall time and monotonic time. Then you use monotonic time to track when a log message is output. To figure out what to display, you add the offset and then treat it like it was wall time. The mechanism to do the calculation of the offset is not quite trivial and can't be guaranteed accurate on a non-realtime system, but having something that is as close to accurate as possible would be nice. In any case, not subject to this pull request; just a comment on something I have needed.
Isn't this good enough?

```d
auto offset = Clock.currTime();
auto t0 = MonoTime.currTime();
//...
auto t1 = MonoTime.currTime();
writeln(offset + cast(Duration)(t1 - t0));
```
Actually, that probably is good enough. In my past experience, the time type for monotonic time and wall time was …

Okay. I've made the updates, and I think that we're good to go now.

Just wanted to note that I'm out of town for a few days, without computer access. I will get to the review of this next week. If it can't wait, I'm pretty confident Jonathan probably addressed everything, so feel free to merge if it looks good. Sorry about not getting to it earlier.

Quick note: could we use …

Well, if you really want to, I suppose that we could, but I would have expected …
@@ -3715,6 +4315,11 @@ string numToString(long value) @safe pure nothrow
    assert(0, "Something threw when nothing can throw.");
}

version(unittest) const(char)* numToStringz(long value) @safe pure nothrow
Is this private?
Yes. There's a `private:` on line 3942, so everything after that is private.
Ah, thanks, missed that in the GitHub display even after expanding the context.
ping

AFAIK. I was hoping that Andrei would reply to my response on renaming …

Any other comments on this, or can we get it merged?

I'd merge it, but I'm not a committer for druntime.

Really? Hmm, well you could probably become one if you asked for it, since you're a committer for Phobos now. And if you've done much with druntime, you probably should. I could do it, but we're not really supposed to be merging our own stuff...

I'll merge once the latest test run completes.

Auto-merge toggled on

Thanks!
Add MonoTime as a replacement for TickDuration
I mostly agree with this argument.

Sorry for missing this pull. I see you decided to use system ticks instead of … The only thing that's missing is …

Do we need …
T before = MonoTime.currTime;
test(before, MonoTime(before._ticks + 4202), Duration.zero);
test(before, MonoTime.currTime, Duration.zero);
This test has recently started failing on my machine. Any ideas what could have caused it?
Perhaps the monotonic clock has overflown? What's your OS and uptime?
Win7-64, 32-bit druntime build. Uptime is ~30 days.
Uptime is ~30 days.
Yeah, that's probably it. MonoTime can overflow, the test is invalid.
`MonoTime` had better not be overflowing. The precision of the system clock and how many ticks it can fit in a `long` should be such that it won't be able to overflow for decades if not centuries. The only way that it could overflow is if the system clock didn't start anywhere near 0, and from everything I've seen, all of the systems we support start the system clock at tick 0. If the system clock is actually going to overflow, then `MonoTime` needs a redesign, particularly with regards to `opCmp`.
I had always thought that the system clock was fast enough in comparison to the CPU that you'd never be able to have two calls to `MonoTime.currTime` return the same result, but my knowledge of stuff that close to the hardware is unfortunately rather poor. Sadly, about all that 3.4GHz means to me is that it's faster than 3.2GHz. :) I should probably fix that...

In any case, #943 should fix this for you.
Well, 3.4GHz translates to one cycle taking ~0.3ns. The number of cycles a set of instructions take to execute is fairly difficult to predict these days, but it's quite possible it fits within the 1000 available if it gets lucky with the timing and the cache.
I've always liked this article on computer architecture: http://www.lighterra.com/papers/modernmicroprocessors/
`MonoTime.ticksPerSecond` is 3312675
On Windows, QueryPerformanceCounter/Frequency does not work with processor ticks on mobile processors that tend to change clock speed. I've also seen resolutions in the low MHz range.
Then what function should we use instead? From what I've read and dealt with before, `QueryPerformanceCounter` is the only function I know of to use (and I'd consider it highly buggy if it doesn't work on mobile processors, but that doesn't mean that we don't have to figure out how to deal with it). And when you talk about mobile processors, are you talking about phones and tablets? Or are you talking about laptops? And if it's phones and tablets, do you know if it affects x86* systems, or is it ARM-only?
They talk about a fallback hardware clock (with probably low precision) to avoid the speed-change issue:
http://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx#general_faq_about_qpc_and_tsc
It still seems like the best option; just fix your test.
`TickDuration` conflates a timestamp of the monotonic clock and a duration in ticks of the monotonic clock, which is causing problems and confusion. In addition, it doesn't follow the same API as `Duration` (since it was developed separately by SHOO), even using the same function names for different things (e.g. `Duration.msecs` gives the milliseconds split out, whereas `TickDuration.msecs` gives the total in milliseconds). And IMHO, having two duration types is overly confusing anyway. By having only one duration type, it's quite clear which duration type code should use, whereas if there are multiple, folks have to understand the differences between them in order to figure out which to use where.

So, I'm proposing that we add `MonoTime`. It will represent a timestamp of the monotonic clock and will have almost no functionality on it (since it doesn't usually make sense to do much directly with a timestamp of the monotonic clock). It can be compared and subtracted but not much else. When subtracting two `MonoTime`s, you'll get a `Duration`. That does result in some rounding errors (because the clock frequency is being converted to hecto-nanoseconds) and might result in lower precision (though many systems don't have a clock resolution/frequency as high as hecto-nanoseconds anyway), but it's eminently usable and user-friendly, and most code doesn't need precision higher than hecto-nanoseconds anyway.

For those rare cases where someone needs to actually operate in system ticks, they can use the `ticks` property of a `MonoTime` and its `ticksPerSecond` property to operate in ticks like `TickDuration` would do now. But most folks shouldn't need to do that.

This should simplify things considerably. The one area that might annoy people is that `Duration` has no support for floating point calculations (whereas `TickDuration` does), but in most cases, it's a bad idea to use floating point values when dealing with time, so people probably shouldn't be using floating point anyway. But that can be discussed separately as an enhancement to `Duration` if it's deemed appropriate; `MonoTime` itself doesn't care.

For now, `TickDuration` has not been deprecated, because it's still used by the benchmarking functions in std.datetime. However, when I split std.datetime (which should be happening soon), I intend to create versions of those functions which use `Duration` and put them in std.benchmark, where they're supposed to eventually end up anyway (but haven't due to issues with implementing std.benchmark). The old versions of those functions which use `TickDuration`, along with `TickDuration` itself, can then be deprecated.

I have, however, removed the few uses of `TickDuration` in druntime outside of core.time with this pull.