Possible performance issue with Instant.FromUnixTimeSeconds #837
Comments
Thanks, I'll have a look. Could you give an example of the values you're using? I'll add a benchmark, then try to optimize... (Hopefully it'll be as simple as being a little less conservative with the optimization.)

Jon, thanks for this - the values in the file our test harness was using come from December 2014 - something like 1418688004 is an example. We would be expecting to use this for data which is being logged in real-time now and for perhaps the next 10 years or so, whatever that would be.

Righto, thanks - will definitely take a look.

It seems like a bit of a tragedy to have a divide in something which was conceptually just an add-multiply - but I can see that the new Duration-based Instant is often going to need that. Has there been much, if any, general performance regression in 2.0 as a result of the change away from ticks?

There are some regressions, but there are also big improvements due to a completely different way of representing dates. (That doesn't depend on the nanos part, of course.) Focusing on just the nanos part, it definitely has an impact on performance, but I expect it to be small (or fixable, like this) for most common use cases... and I believe the use of .NET Core is definitely going to make ticks seem more and more anachronistic.

@jskeet I applaud the nanoseconds, don't worry. It's
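The divide being discussed comes from Noda Time 2.0 representing an `Instant` as a day number plus a nanosecond-of-day, rather than a single tick count. A minimal Python sketch of why adding nanoseconds to such a pair needs a divide (the field names here are illustrative, not Noda Time's actual internals):

```python
NANOS_PER_DAY = 24 * 60 * 60 * 1_000_000_000

def plus_nanoseconds(days, nano_of_day, nanos):
    """Add a nanosecond count to a (days, nano-of-day) pair.

    The divmod is the 'divide' under discussion: the sum must be
    renormalized so that 0 <= nano_of_day < NANOS_PER_DAY.
    """
    extra_days, new_nano_of_day = divmod(nano_of_day + nanos, NANOS_PER_DAY)
    return days + extra_days, new_nano_of_day
```

For the example value above, `plus_nanoseconds(0, 0, 1418688004 * 10**9)` lands on day 16420 with four seconds' worth of nanoseconds left over.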
Will validate fix for nodatime#837.
Early indications suggest we can make this about 30x faster (500ns -> 17ns). (That's measuring on netcoreapp1.1. We may be able to improve net45 further using DivRem.) Thanks for pointing this out - massive win available :) |
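A guess at the shape of that optimization (a sketch, not the actual Noda Time patch): for seconds-precision input, one combined quotient/remainder operation (`Math.DivRem` in .NET, `divmod` in Python) splits the value into a day number and a second-of-day, and the remainder is then scaled to nanoseconds with a plain multiply, with no BigInteger arithmetic involved:

```python
SECONDS_PER_DAY = 86_400
NANOS_PER_SECOND = 1_000_000_000

def from_unix_time_seconds(seconds):
    # One division yields both fields; scaling the remainder to
    # nanoseconds is a safe 64-bit multiply, since second_of_day
    # is always less than 86_400.
    days, second_of_day = divmod(seconds, SECONDS_PER_DAY)
    return days, second_of_day * NANOS_PER_SECOND
```

Note that Python's floor-division semantics also handle pre-epoch timestamps: `from_unix_time_seconds(-1)` gives day -1 with 86399 seconds of day, matching a day-based representation.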
(Using NodaTime 2.0.2 in a release-built app on netfx 4.6.1 w. VS2017)

I've been profiling an app which reads log files containing about 100000 records that include a time_t-style timestamp. Since upgrading to NodaTime 2 we've been converting these timestamps to `Instant` using `Instant.FromUnixTimeSeconds`. It appears that this function is actually quite time-consuming, at least when it chooses to use `BigInteger` internally. Reverting to something like `_epoch.PlusNanoseconds(time_t * 1000000000L)` (where `_epoch` is a static `Instant` in 1970) has just given us an enormous speed-up in the app (roughly 2:1 on the whole file-processing routine, which was a considerable surprise).

It seems like `Duration operator *` has some logic to try to avoid `BigInteger` if it can, but it would appear that in our case `BigInteger` could have been avoided and `Duration` did not choose to do so. I have not yet tried to calculate all the limits here, but is the very poor performance of `BigInteger` just a fact of life, and are there situations where it could be avoided but isn't?
Here's a bit of DotTrace which might be useful.
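For context on why the reporter's workaround is safe, here is a hypothetical range check (a Python sketch, not Noda Time code) showing that scaling seconds to nanoseconds fits comfortably in a signed 64-bit value for any plausible timestamp; the cutoff is roughly the year 2262:

```python
NANOS_PER_SECOND = 1_000_000_000
LONG_MAX = 2**63 - 1                          # Int64.MaxValue
SAFE_SECONDS = LONG_MAX // NANOS_PER_SECOND   # about 9.2 billion seconds

def seconds_to_nanos(time_t):
    # Any Unix timestamp within about 292 years of the epoch
    # (i.e. up to roughly the year 2262) scales to nanoseconds
    # without overflowing a 64-bit long, so no BigInteger
    # fallback is needed for realistic log data.
    if abs(time_t) > SAFE_SECONDS:
        raise OverflowError("would overflow a 64-bit nanosecond count")
    return time_t * NANOS_PER_SECOND
```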