
Millis precision vs. nano precision of java.time.Instant on Java 10+ [DATAMONGO-2014] #2883

Closed
spring-projects-issues opened this issue Jun 25, 2018 · 6 comments
Labels
in: core status: declined type: enhancement



spring-projects-issues commented Jun 25, 2018

Cesar Tron-Lozai opened DATAMONGO-2014 and commented

We started noticing that some of our unit tests were failing on Java 10. Those tests were comparing Instants saved to MongoDB.

I understand that Spring Data stores a JSR-310 Instant in MongoDB using only its millisecond component. That worked fine with Java 8 because the default Clock implementation used System.currentTimeMillis() to create a new Instant.

However, the default Clock in Java 10 (SystemClock) looks like this:

@Override
public Instant instant() {
    // Take a local copy of offset. offset can be updated concurrently
    // by other threads (even if we haven't made it volatile) so we will
    // work with a local copy.
    long localOffset = offset;
    long adjustment = VM.getNanoTimeAdjustment(localOffset);

    if (adjustment == -1) {
        // -1 is a sentinel value returned by VM.getNanoTimeAdjustment
        // when the offset it is given is too far off the current UTC
        // time. In principle, this should not happen unless the
        // JVM has run for more than ~136 years (not likely) or
        // someone is fiddling with the system time, or the offset is
        // by chance at 1ns in the future (very unlikely).
        // We can easily recover from all these conditions by bringing
        // back the offset in range and retry.

        // bring back the offset in range. We use -1024 to make
        // it more unlikely to hit the 1ns in the future condition.
        localOffset = System.currentTimeMillis()/1000 - 1024;

        // retry
        adjustment = VM.getNanoTimeAdjustment(localOffset);

        if (adjustment == -1) {
            // Should not happen: we just recomputed a new offset.
            // It should have fixed the issue.
            throw new InternalError("Offset " + localOffset + " is not in range");
        } else {
            // OK - recovery succeeded. Update the offset for the
            // next call...
            offset = localOffset;
        }
    }
    return Instant.ofEpochSecond(localOffset, adjustment);
}

Instants created with this clock have a nanosecond adjustment, which means that if you have an entity Foo with a field of type Instant, the instant will be different after being fetched from MongoDB.

I'm not saying this is a bug, but I'd be curious to discuss strategies to mitigate this potential issue.
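
For illustration, a minimal JDK-only sketch of the symptom; the MongoDB round trip is simulated here with toEpochMilli, and truncating before comparing is one way to make such tests stable again:

import java.time.Instant;
import java.time.temporal.ChronoUnit;

class InstantPrecisionDemo {

    public static void main(String[] args) {
        // On Java 9+ the system clock yields Instants with a sub-millisecond component.
        Instant beforeSave = Instant.now();

        // A MongoDB date keeps only milliseconds, so a save-and-fetch round trip
        // effectively does this to the value:
        Instant afterFetch = Instant.ofEpochMilli(beforeSave.toEpochMilli());

        System.out.println(beforeSave.equals(afterFetch));                                 // usually false
        System.out.println(beforeSave.truncatedTo(ChronoUnit.MILLIS).equals(afterFetch));  // true
    }
}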

 


Issue Links:

  • DATACMNS-1349 CustomConversions should allow for selective registration of default converters

  • DATAMONGO-2017 Use driver codec for Instant, LocalTime, LocalDate, and LocalDateTime


spring-projects-issues commented Jun 27, 2018

Mark Paluch commented

Instant is converted to java.util.Date for writing and read back via Date#toInstant(), applying the system-default ZoneId along the way.

As far as I understand, there is no Clock involved in the Date-to-Instant conversion and vice versa. Care to elaborate on how the Clock is involved?

See:
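
For illustration only (plain JDK, not the Spring Data converter source), a minimal sketch of the Date round trip showing where the millisecond cap comes from:

import java.time.Instant;
import java.util.Date;

class DateRoundTripDemo {

    public static void main(String[] args) {
        Instant original = Instant.parse("2018-06-28T07:27:28.651722123Z");

        // java.util.Date stores epoch milliseconds only, so the conversion drops
        // everything below the millisecond; no Clock is involved in either direction.
        Date stored = Date.from(original);
        Instant readBack = stored.toInstant();

        System.out.println(original); // 2018-06-28T07:27:28.651722123Z
        System.out.println(readBack); // 2018-06-28T07:27:28.651Z
    }
}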


spring-projects-issues commented Jun 27, 2018

Cesar Tron-Lozai commented

Sorry, I realise I wasn't being clear. The Clock is indeed not used in the conversion. We use a Clock to generate Instants in our application, so we basically used to get Instants without a nanosecond adjustment, which worked great; now they do have one, and it causes problems when they are persisted.

As I said, it's not an issue in Spring Data per se, but it can be a trap for other people too. I wanted to raise this as a discussion to see if we could mitigate the issue, for example by generating a warning if an Instant with a nanosecond component is about to be persisted. I'm open to any ideas.
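
One possible application-side mitigation (a sketch of an option not raised in this thread) is to generate Instants from a millisecond-ticking Clock, available since Java 9:

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

class MillisClockDemo {

    // Clock.tickMillis truncates to whole milliseconds, matching MongoDB's date precision.
    private static final Clock MILLIS_CLOCK = Clock.tickMillis(ZoneOffset.UTC);

    public static void main(String[] args) {
        Instant instant = Instant.now(MILLIS_CLOCK);
        System.out.println(instant.getNano() % 1_000_000); // always 0
    }
}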


spring-projects-issues commented Jun 28, 2018

Mark Paluch commented

Thanks for your explanation. I adjusted the ticket title to reflect the actual concern. As you already mentioned, MongoDB dates provide only millisecond precision, which truncates the nanosecond adjustment of an Instant when the date is read back.
Depending on your integration and query requirements, one could represent Instant differently within MongoDB.

As Document:

{
  instant: {seconds: 1530170875, nanos: 473340000}
}

As String:

{
  instant: "2018-06-28T07:27:28.651722123Z"
}

Both should be possible by providing custom Converters. Storing Instant as a MongoDB date is a logical choice, and there's little we could do from our side. Using a different representation requires tradeoffs, and I don't think that logging the precision loss is feasible, as it would either go unnoticed or spam the logs. It would make sense to document the Instant type mapping in our reference documentation and mention the precision issue there.
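
A minimal sketch of the String-based option via custom converters, assuming Spring Data MongoDB's MongoCustomConversions for registration (the class and bean names here are illustrative):

import java.time.Instant;
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;
import org.springframework.data.convert.WritingConverter;
import org.springframework.data.mongodb.core.convert.MongoCustomConversions;

@Configuration
class InstantAsStringConfig {

    // Writes the full nanosecond precision as an ISO-8601 string.
    @WritingConverter
    enum InstantToStringConverter implements Converter<Instant, String> {
        INSTANCE;

        @Override
        public String convert(Instant source) {
            return source.toString(); // e.g. "2018-06-28T07:27:28.651722123Z"
        }
    }

    // Parses the ISO-8601 string back without precision loss.
    @ReadingConverter
    enum StringToInstantConverter implements Converter<String, Instant> {
        INSTANCE;

        @Override
        public Instant convert(String source) {
            return Instant.parse(source);
        }
    }

    @Bean
    MongoCustomConversions mongoCustomConversions() {
        return new MongoCustomConversions(
                List.of(InstantToStringConverter.INSTANCE, StringToInstantConverter.INSTANCE));
    }
}

The tradeoff mentioned above applies: values stored as strings lose native date semantics, so range queries and ISODate tooling behave differently.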


spring-projects-issues commented Jul 2, 2018

Mark Paluch commented

In light of the MongoDB 3.8 driver upgrade, the driver is now capable of converting Instant objects itself. Driver-side conversion sticks to MongoDB's ISODate precision. See the comment above for possible application-side alternatives.

According to the BSON spec:

UTC datetime - The int64 is UTC milliseconds since the Unix epoch.

We will investigate native JSR-310 type support in DATAMONGO-2017, which leaves closing this ticket as the only remaining option.


spring-projects-issues commented Jul 2, 2018

Cesar Tron-Lozai commented

It's fair to close the ticket.

Would it be possible to add a warning somewhere in the documentation to remind users that Instants are stored with millisecond precision? There could be a link to the MongoDB spec to highlight that this is not a Spring-level decision.


spring-projects-issues commented Jul 3, 2018

Mark Paluch commented

We considered adding a note to the docs but decided against doing so. Documenting each detail (there are a lot of details in various areas) would bloat our documentation and blur the significant parts.

spring-projects-issues added the status: declined, type: enhancement, and in: core labels on Dec 30, 2020