
Use GetSystemTimePreciseAsFileTime for DateTime.UtcNow #5061

Closed
GSPP opened this issue May 18, 2016 · 10 comments

8 participants

@GSPP commented May 18, 2016

The GetSystemTimePreciseAsFileTime API provides more precise timing information on Windows 8+. .NET should use this API when available so that the extra precision is exposed to .NET code.

Often, the precision of UtcNow is ~15 ms. If all log items in close temporal proximity share the same timestamp, their relative order cannot be determined. Logging has become more important in the cloud, and it is very helpful to be able to order events relative to one another. This also sometimes trips people up when benchmarking.

This seems like a fairly inexpensive change that would positively impact many applications in one way or another.

Benchmark results:

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;
    using System.Security;

    static void Main(string[] args)
    {
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;

        {
            var sw = Stopwatch.StartNew();
            long result = 0;
            for (int i = 0; i < 100000000; i++)
            {
                FILE_TIME ft;
                GetSystemTimeAsFileTime(out ft);
                result += ft.ftTimeLow;
            }
            sw.Stop();
            GC.KeepAlive(result);
            Console.WriteLine(sw.Elapsed.TotalMilliseconds);
        }

        {
            var sw = Stopwatch.StartNew();
            long result = 0;
            for (int i = 0; i < 100000000; i++)
            {
                FILE_TIME ft;
                GetSystemTimePreciseAsFileTime(out ft);
                result += ft.ftTimeLow;
            }
            sw.Stop();
            GC.KeepAlive(result);
            Console.WriteLine(sw.Elapsed.TotalMilliseconds);
        }

        Console.ReadKey();
    }

    [StructLayout(LayoutKind.Sequential)]
    struct FILE_TIME // same layout as the Win32 FILETIME struct
    {
        public uint ftTimeLow;  // low 32 bits of the 100 ns tick count
        public uint ftTimeHigh; // high 32 bits
    }

    [SuppressUnmanagedCodeSecurity, DllImport("kernel32.dll")]
    static extern void GetSystemTimeAsFileTime(out FILE_TIME lpSystemTimeAsFileTime);

    [SuppressUnmanagedCodeSecurity, DllImport("kernel32.dll")]
    static extern void GetSystemTimePreciseAsFileTime(out FILE_TIME lpSystemTimeAsFileTime);

GetSystemTimeAsFileTime: 1341.3188 ms (~70 million calls per second)
GetSystemTimePreciseAsFileTime: 3655.7463 ms (~28 million calls per second)

Warning: I needed to run this in a VMware VM so that I could test on a newer Windows version (10). This might skew the timings. Someone should run this code on a non-virtualized Windows 8+ machine.

Performance seems more than sufficient for both versions, assuming the benchmark is valid.

If the perf hit is deemed unacceptable, it would be nice if this were available as a configuration option. Of course, user code could P/Invoke the precise API itself, but that would not affect the other libraries sitting in the same application: all code calling UtcNow would need to be modified, which can be impossible (e.g. in a logging library).

@omariom (Contributor) commented May 18, 2016
Outputs on my machine (ultrabook, Windows 10):

1435.497 ms
2930.4335 ms
@mj1856 (Contributor) commented May 23, 2016

The tests seem to show a trade-off of precision vs. throughput. That alone suggests it should be exposed as a separate API. After all, Win32 exposed this via a new API instead of improving the perf of the existing one, so why would the decision differ here?

Also, if this were done in UtcNow, would it need to test whether GetSystemTimePreciseAsFileTime is available? If so, wouldn't that check add to the overhead? Or could it be done one time elsewhere?

@GSPP (Author) commented May 24, 2016

@mj1856 yes, that check would need to be performed, but it would likely cost a couple of CPU cycles, whereas the whole operation takes ~1,000 cycles.

Since we can achieve 28 million calls per second I'm personally not worried about performance at all. This is a super fast API.

It's also hard to see how this could cause a regression. An application would need to spend a significant amount of time in UtcNow calls (5% or more) for the change to make a meaningful difference. If that is the case, the app is not tuned at all, and CPU consumption is apparently of no interest to its customers.
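The availability check mj1856 asked about can indeed be paid once rather than per call. A hypothetical sketch of that pattern in Python (the clock lookup stands in for a one-time GetProcAddress of GetSystemTimePreciseAsFileTime; names are illustrative):

```python
import time

def _select_clock():
    # One-time feature detection: prefer the precise clock, fall back to the
    # coarse one if the platform does not expose it. The branch runs once at
    # startup, not on every timestamp.
    clock_id = getattr(time, "CLOCK_REALTIME", None)
    if clock_id is None:
        clock_id = time.CLOCK_REALTIME_COARSE
    return lambda: time.clock_gettime_ns(clock_id)

utc_now_ns = _select_clock()  # resolved once; each call goes straight to the clock

stamp = utc_now_ns()
```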

@mj1856 (Contributor) commented Jan 30, 2017

I'm on the fence on this.

We have two different low-level system APIs. One is more precise, but offers less throughput than the other. On one hand, we could improve precision for everyone overnight. But on the other hand, who are we to say that nobody wants less precision and more performance? I'm sure there are indeed scenarios where people are timestamping millions of items per second and don't care as much about precision as they do about throughput.

Off the top of my head, serving HTTP (Kestrel, etc.) seems to be a good example. Timestamps are needed for response headers, logging, etc. Yes, we have benchmarks for ASP.NET Core on Kestrel of 1.17M req/sec, which is significantly less than the 28M timestamps/sec measured here, but does that ratio hold up on all hardware this might run on? If, for example, someone down the road is able to run ASP.NET Core on a Raspberry Pi, could this become an unnecessary bottleneck?

BTW - I did a little rough experimentation. It would seem that we are currently using the gettimeofday() function on Linux (here), which gives roughly the same precision as clock_gettime(CLOCK_REALTIME_COARSE) (and GetSystemTimeAsFileTime on Windows).

It seems that if we are to be consistent, we should be using clock_gettime(CLOCK_REALTIME) and GetSystemTimePreciseAsFileTime. But the question remains: do we do this for the existing DateTime.UtcNow API, or do we create a new API for this purpose?

One last point: I was reading a TechNet article today about improvements to time accuracy in Windows Server 2016, and noticed it calls out GetSystemTimePreciseAsFileTime. If one is being intentional about getting an accurate timestamp, then it seems we would want to give them an API that is deliberately so. For example, if I'm trying to meet the 1ms-from-UTC requirements of the ESMA/MiFID II financial sector regulation that is called out in the article, I would feel uncomfortable just using DateTime.UtcNow and hoping that the underlying implementation was indeed the one I wanted. I would likely P/Invoke, unless something like DateTime.UtcNowPrecise were there to clue me in.

@halter73 (Member) commented Feb 2, 2017

In the case of Kestrel, we query DateTimeOffset.UtcNow no more than once a second no matter the request frequency since we don't care too much about precision.

@vancem (Member) commented Feb 7, 2017

I think we have reduced the issue to either

  • introducing a UtcNowPrecise API that has increased precision, OR
  • simply making UtcNow more precise (at the cost of making UtcNow 2-3X slower, though the absolute cost is small).

I do think that UtcNow's main high-frequency use is for timestamping. Interestingly, if you are timestamping things that take a while (e.g. > 1 msec), then the cost of UtcNow will be small in comparison. Thus it is exactly when you want the precision that you are likely to be calling this a lot, so you are only 'saving' cost when it does not matter (when the other work is much larger).

The next point I want to make is that the 16 msec granularity IS A PITFALL, because UtcNow IS used as a timestamp for things like requests (which may take only a few msec). Thus there IS value in just making it work as people expect.

Finally, I will say that, generally speaking, in API design we DO differ from the underlying OS in that we try quite hard to be SIMPLE and to avoid pitfalls. Thus we would not like to introduce API complexity (and pitfalls) unless we understand why (we can articulate an important performance SCENARIO where the perf difference matters).

If people are truly on the fence here, I would recommend the following

1) Make the change where UtcNow is more precise.

If we find that performance is an issue, we could then add UtcNowLowPrecision, which exposes the faster (but less precise) mechanism. This would only be used by advanced people who care about perf but not precision (assuming this set exists). It would be very rarely used, and would not be a pitfall because its name clearly indicates the tradeoff it makes.

My guess is that the set of people who would want UtcNowLowPrecision is empty, so we will end up with a simpler system.

The most important point however is to make a choice (decide which side of the fence we are on).

Comments?

Vance Morrison, .NET Runtime Performance Architect.
@terrajobst

@stephentoub (Member) commented Feb 7, 2017

I would recommend the following

  1. Make the change where UtcNow is more precise.

+1

@benaadams (Collaborator) commented Feb 7, 2017

I think a serious pitfall with it is the fact that it is actually very precise; however, it's not very accurate. The reported ticks vary down to the last digit, so by casual observation it appears to vary a lot, and it's only when you call it in close succession that you see there is a problem (or via reading the docs or checking SO 😉).

So I'd suggest, in addition to any more accurate API changes, that any less accurate API have its low-order tick digits masked off so that the precision matches the accuracy. That way, observing the Ticks or milliseconds also conveys the accuracy, and the precision does not give a false sense of accuracy.
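A sketch of that masking idea (the tick size matches .NET's 100 ns ticks, but the 16 ms granularity and the helper name are illustrative, not real DateTime internals):

```python
TICKS_PER_MS = 10_000  # .NET DateTime ticks are 100 ns each

def mask_ticks(ticks: int, granularity_ms: int = 16) -> int:
    """Round a tick count down so the reported precision matches the
    clock's actual accuracy (here, a 16 ms update granularity)."""
    step = granularity_ms * TICKS_PER_MS
    return ticks - (ticks % step)

# 1234567891 ticks rounds down to the containing 16 ms boundary:
masked = mask_ticks(1234567891)  # 1234560000
```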

@vancem (Member) commented Feb 7, 2017

I agree that unhelpful precision should be rounded away. It is easy enough to do. Again, ideally no pitfalls (if we can avoid them easily).

@mj1856 (Contributor) commented Feb 24, 2017

WRT accuracy, note that with Windows Server 2016 and the Windows 10 Anniversary Update, accuracy has been improved substantially at the OS level.

https://technet.microsoft.com/en-us/windows-server-docs/identity/ad-ds/get-started/windows-time-service/windows-2016-accurate-time

Of course one still needs to have a reliable time source, but things are nowhere near as bad as they used to be.
