Use GetSystemTimePreciseAsFileTime for DateTime.UtcNow #5883
Outputs on my machine (ultrabook, Windows 10)
Seems like the tests show that there's a trade-off of precision vs. throughput. That alone suggests it should be exposed as a separate API. After all, Win32 exposed the precise time via a new API instead of improving the perf of its existing one, so why would the decision differ here? Also, if this was done in …
@mj1856 yes, that check would need to be performed, and it would likely take about 2 CPU cycles, whereas the whole operation takes ~1000 cycles. Since we can achieve 28 million calls per second, I'm personally not worried about performance at all. This is a super fast API. It's also hard to see how this could cause a regression. An application would need to spend a significant amount of time in UtcNow calls (upwards of 5% for it to make a meaningful difference). If that is the case, the app is not tuned at all and CPU consumption appears to be of no interest to the app's customers.
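To make the shape of that check concrete, here is a minimal sketch, not the actual CoreCLR implementation (the `SystemClock` and `ProbeForPreciseTime` names are invented): probe once for the precise API at startup and branch on a cached flag per call.

```csharp
using System;
using System.Runtime.InteropServices;

static class SystemClock
{
    [DllImport("kernel32.dll")]
    private static extern void GetSystemTimeAsFileTime(out long fileTime);

    [DllImport("kernel32.dll")]
    private static extern void GetSystemTimePreciseAsFileTime(out long fileTime);

    // Probe once; the per-call overhead is then a single cached-flag check,
    // the "~2 cycles out of ~1000" discussed above.
    private static readonly bool s_hasPreciseTime = ProbeForPreciseTime();

    private static bool ProbeForPreciseTime()
    {
        // P/Invoke binds lazily, so calling the precise API on a pre-Windows 8
        // system throws EntryPointNotFoundException; catch it once at startup.
        try { GetSystemTimePreciseAsFileTime(out _); return true; }
        catch (EntryPointNotFoundException) { return false; }
    }

    public static DateTime UtcNow
    {
        get
        {
            long fileTime;
            if (s_hasPreciseTime) GetSystemTimePreciseAsFileTime(out fileTime);
            else GetSystemTimeAsFileTime(out fileTime);
            return DateTime.FromFileTimeUtc(fileTime);
        }
    }
}
```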
I'm on the fence on this. We have two different low-level system APIs. One is more precise, but offers less throughput than the other. On one hand, we could improve precision for everyone overnight. But on the other hand, who are we to say that nobody wants less precision and more performance? I'm sure there are indeed scenarios where people are timestamping millions of items per second and don't care as much about precision as they do about throughput. Off the top of my head, serving HTTP (Kestrel, etc.) seems to be a good example. Timestamps are needed for response headers, logging, etc. Yes, we have benchmarks for ASP.NET Core on Kestrel of 1.17M req/sec, and that's significantly less than the 28M timestamps/sec measured here - but does that ratio hold up on all hardware this might run on? If, for example, down the road someone is able to run ASP.NET Core on a Raspberry Pi - could this become an unnecessary bottleneck?

BTW - I did a little rough experimentation. It would seem that we are currently using the less precise GetSystemTimeAsFileTime. It seems that if we are to be consistent, we should be using GetSystemTimePreciseAsFileTime.

One last point: I was reading today a TechNet article about improvements to time accuracy in Windows Server 2016, and noticed it calls out GetSystemTimePreciseAsFileTime.
In the case of Kestrel, we query DateTimeOffset.UtcNow no more than once a second, no matter the request frequency, since we don't care too much about precision.
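For illustration, a minimal sketch of that once-per-second caching pattern; this is my own reconstruction of the approach described above, not Kestrel's actual code, and `CoarseClock` is an invented name.

```csharp
using System;
using System.Threading;

sealed class CoarseClock : IDisposable
{
    private long _utcTicks = DateTime.UtcNow.Ticks;
    private readonly Timer _timer;

    public CoarseClock()
    {
        // Refresh the cached timestamp once per second; request handling
        // reads a field instead of querying the system clock every time.
        _timer = new Timer(_ => Volatile.Write(ref _utcTicks, DateTime.UtcNow.Ticks),
                           null, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(1));
    }

    // Ticks are stored as a long so the read/write is atomic via Volatile.
    public DateTime UtcNow => new DateTime(Volatile.Read(ref _utcTicks), DateTimeKind.Utc);

    public void Dispose() => _timer.Dispose();
}
```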
I think we have reduced the issue to either (1) changing UtcNow to use the precise API, or (2) keeping the current fast-but-coarse behavior and exposing the precise time through a separate API.

I do think that UtcNow's main high-frequency use is timestamping. Interestingly, if you are timestamping things that take a while (e.g. > 1 msec), then the cost of UtcNow will be small in comparison. Thus it is exactly when you want the precision that you are likely to be calling this a lot, so you are only 'saving' cost when it does not matter (when the other work is much larger).

The next point I want to make is that the 16 msec granularity IS A PITFALL, because UtcNow IS used as a timestamp for things like requests (which may only take a few msec). Thus there IS value in just making it work as people expect.

Finally, I will say that, generally speaking, in API design we DO differ from the underlying OS in that we try considerably harder to be SIMPLE and to avoid pitfalls. Thus we would not like to introduce API complexity (and pitfalls) unless we understand why (we can articulate an important performance SCENARIO where the perf difference matters).

If people are truly on the fence here, I would recommend simply making UtcNow precise.

If we find that the performance is an issue, we could then add UtcNowLowPrecision, which exposes the faster (but less precise) mechanism. This would only be used by advanced people who care about perf but not precision (assuming this set exists). It would be very rarely used, and would not be a pitfall because its name clearly indicates the trade-off it makes. My guess is that the set of people who would want UtcNowLowPrecision is empty, so we will end up with a simpler system.

The most important point, however, is to make a choice (decide which side of the fence we are on). Comments?

Vance Morrison, .NET Runtime Performance Architect.
+1
I think a serious pitfall with it is the fact that it is actually very precise; however, it's not very accurate. The reported ticks vary down to the last digit, so by casual observation it can be seen that the value varies a lot, and it's only when you call it in quick succession that you see there is a problem (or via reading docs or checking SO 😉). So I'd suggest, in addition to any more-accurate API changes, that any less-accurate API have its tick precision masked off so that the precision matches the accuracy - that way, observing the Ticks or milliseconds also conveys the accuracy, and the precision doesn't give a false sense of accuracy.
I agree that unhelpful precision should be rounded away. It is easy enough to do. Again, ideally no pitfalls (if we can avoid them easily).
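A sketch of what that masking could look like; the ~15.6 ms figure assumes the default Windows timer granularity, and the constant and names here are illustrative rather than a proposed implementation.

```csharp
using System;

static class CoarseTicks
{
    // ~15.6 ms expressed in 100 ns DateTime ticks; a real mask would match
    // whatever granularity the underlying clock actually has.
    private const long GranularityTicks = 156_000;

    public static DateTime MaskToGranularity(DateTime value)
    {
        // Drop the sub-granularity digits so the observed precision no
        // longer overstates the clock's accuracy.
        return new DateTime(value.Ticks - (value.Ticks % GranularityTicks), value.Kind);
    }
}
```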
WRT accuracy, note that with Windows Server 2016 and the Windows 10 Anniversary Update, the accuracy has been improved substantially at the OS level. Of course, one still needs a reliable time source, but things are nowhere near as bad as they used to be.
This discussion makes sense to me. Thanks for sharing your thoughts on this.
The GetSystemTimePreciseAsFileTime API provides more precise timing information on Windows 8+. .NET should use this API if available to make the additional precision available to .NET code.

Often, the precision of UtcNow is ~15 ms. Sometimes the relative order of log items cannot be determined if all log items in close temporal proximity have the same timestamp. Logging has become more important in the cloud, and it's very helpful to be able to order events relative to one another. Also, this sometimes trips people up when trying to benchmark.

This seems like a fairly inexpensive change, impacting many applications positively one way or another.
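For anyone who wants to observe the granularity directly, here is a small repro sketch (my own illustration, assuming UtcNow is backed by the coarse GetSystemTimeAsFileTime; on an affected system it prints steps of roughly 15-16 ms):

```csharp
using System;

class GranularityDemo
{
    static void Main()
    {
        // Spin until the clock value changes a few times and print the step
        // size; with the coarse API this shows ~15-16 ms jumps, so events
        // logged within one quantum share an identical timestamp.
        DateTime last = DateTime.UtcNow;
        for (int changes = 0; changes < 5; )
        {
            DateTime now = DateTime.UtcNow;
            if (now != last)
            {
                Console.WriteLine($"clock advanced by {(now - last).TotalMilliseconds:F1} ms");
                last = now;
                changes++;
            }
        }
    }
}
```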
Benchmark results:
GetSystemTimeAsFileTime: 1341,3188 ms, 70 million calls per second
GetSystemTimePreciseAsFileTime: 3655,7463 ms, 28 million calls per second
Warning: I needed to run this in a VMWare VM so that I could test on a newer Windows version (10). This might skew the timings. Somebody should run that code on a Windows 8+ unvirtualized machine.
Performance seems very sufficient for both versions assuming the benchmark is valid.
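The benchmark source was not included above, so the following is only a rough reconstruction of what such a harness could look like; the 100-million iteration count is an assumption, though it happens to be consistent with the reported times and call rates.

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

class Bench
{
    [DllImport("kernel32.dll")]
    static extern void GetSystemTimeAsFileTime(out long fileTime);

    [DllImport("kernel32.dll")]
    static extern void GetSystemTimePreciseAsFileTime(out long fileTime);

    const int Iterations = 100_000_000; // assumed; not stated in the issue

    static void Main()
    {
        Run("GetSystemTimeAsFileTime", () => GetSystemTimeAsFileTime(out _));
        Run("GetSystemTimePreciseAsFileTime", () => GetSystemTimePreciseAsFileTime(out _));
    }

    static void Run(string name, Action call)
    {
        call(); // warm up and force P/Invoke resolution before timing
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++) call();
        sw.Stop();
        double callsPerSec = Iterations / sw.Elapsed.TotalSeconds;
        Console.WriteLine($"{name}: {sw.Elapsed.TotalMilliseconds:F4} ms, " +
                          $"{callsPerSec / 1e6:F0} million calls per second");
    }
}
```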
If the perf hit is deemed unacceptable, it would be nice if this was available as a configuration option. Of course, user code could P/Invoke the precise API itself, but that would not affect all the other libraries sitting in the same application: all code calling UtcNow would need to be modified, which can be impossible (e.g. in a logging library).
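For completeness, the per-application workaround mentioned above might look like this sketch (`PreciseClock` is an invented name; requires Windows 8+):

```csharp
using System;
using System.Runtime.InteropServices;

// Workaround helper: P/Invoke the precise API directly. This only helps
// code that calls this helper; anything that calls DateTime.UtcNow
// internally (e.g. a logging library) is unaffected.
static class PreciseClock
{
    [DllImport("kernel32.dll")]
    private static extern void GetSystemTimePreciseAsFileTime(out long fileTime);

    public static DateTime UtcNow
    {
        get
        {
            GetSystemTimePreciseAsFileTime(out long fileTime); // Windows 8+ only
            return DateTime.FromFileTimeUtc(fileTime);
        }
    }
}
```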