Possible performance issue with Instant.FromUnixTimeSeconds #837
(Using NodaTime 2.0.2 in a release-built app on netfx 4.6.1 w. VS2017)
I've been profiling an app which reads log files containing about 100000 records that include a time_t-style timestamp.
Since upgrading to NodaTime 2 we've been converting these timestamps to `Instant`s using `Instant.FromUnixTimeSeconds()`.
It appears that this function is actually quite time-consuming, at least when it chooses to use `BigInteger` internally - reverting to something like a manual multiply-and-add on ticks is far faster.
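For concreteness, here's a minimal sketch of the two conversion paths being compared. The ticks-based workaround is my illustration of the kind of multiply-and-add fallback I mean, not the exact code from our app; it assumes the input seconds are small enough that the multiplication can't overflow a `long`:

```csharp
using NodaTime;

static class TimestampConversion
{
    // The call under discussion: converts a time_t-style value directly.
    internal static Instant ViaLibrary(long unixSeconds) =>
        Instant.FromUnixTimeSeconds(unixSeconds);

    // Sketch of a multiply-and-add workaround: scale seconds to ticks
    // ourselves and use the ticks entry point, staying in long arithmetic
    // throughout (safe for any plausible log timestamp).
    internal static Instant ViaTicks(long unixSeconds) =>
        Instant.FromUnixTimeTicks(unixSeconds * NodaConstants.TicksPerSecond);
}
```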
It seems like `BigInteger` shouldn't be needed for timestamps in any realistic range. I have not yet tried to calculate all the limits here, but is the very poor performance of `BigInteger` just a fact of life, and are there situations where it could be avoided but isn't?
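As a back-of-envelope check (my own arithmetic, not a claim about where NodaTime's internal thresholds actually sit), pure `Int64` nanosecond arithmetic alone can represent roughly ±292 years around the Unix epoch:

```csharp
// Rough bounds for flat Int64 nanosecond arithmetic. Where NodaTime
// actually switches to BigInteger may differ, since its representation
// is not a flat nanosecond count.
const long NanosPerSecond = 1_000_000_000L;
long maxSafeSeconds = long.MaxValue / NanosPerSecond;        // ≈ 9.2e9 seconds
double maxSafeYears = maxSafeSeconds / (365.25 * 24 * 3600); // ≈ 292 years
System.Console.WriteLine($"Int64 nanos cover about ±{maxSafeYears:F0} years from the epoch");
```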
Here's a bit of DotTrace which might be useful.
Jon, thanks for this - the values in the file our test harness was using come from December 2014 - something like 1418688004 is an example.
We'd expect to use this for data being logged in real time now, and for perhaps the next 10 years or so.
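Checked arithmetic confirms that timestamps in that range fit comfortably in a `long` even when scaled all the way to nanoseconds (again my own arithmetic, just to show that no wide arithmetic should be needed for these inputs):

```csharp
// The December 2014 example, scaled to nanoseconds: about 1.4e18,
// far below long.MaxValue (~9.2e18), so checked() does not throw.
long exampleSeconds = 1418688004L;
long asNanos = checked(exampleSeconds * 1_000_000_000L);
System.Console.WriteLine(asNanos); // 1418688004000000000
```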
It seems like a bit of a tragedy to have a divide in something which was conceptually just an add-multiply - but I can see that the new Duration-based Instant is often going to need that.
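To make the divide concrete: if an instant is stored as a day number plus a nanosecond-of-day (a simplified stand-in for NodaTime's actual representation, not its real code), converting a flat seconds count forces a division, where the old flat-ticks representation needed only a multiply and an add:

```csharp
static class InstantSketch
{
    const long SecondsPerDay = 86_400L;
    const long NanosPerSecond = 1_000_000_000L;

    // Simplified stand-in for a days + nanosecond-of-day representation.
    internal static (long days, long nanoOfDay) FromUnixSeconds(long seconds)
    {
        long days = seconds / SecondsPerDay;            // the divide in question
        long rem = seconds - days * SecondsPerDay;
        if (rem < 0) { days--; rem += SecondsPerDay; }  // floor for pre-1970 values
        return (days, rem * NanosPerSecond);
    }
}
```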
Has there been much, if any, general performance regression in 2.0 as a result of the change away from ticks?
There are some regressions, but there are also big improvements due to a completely different way of representing dates. (That doesn't depend on the nanos part, of course.)
Focusing on just the nanos part, it definitely has an impact on performance, but I expect it to be small (or fixable, like this) for most common use cases... and I believe the use of .NET Core is definitely going to make ticks seem more and more anachronistic.