Truncate time.Time to microsecond resolution before inserting #227
Comments
Good catch. Technically, this is silent data loss.
Here's a test program that demonstrates the problem:

```go
package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/lib/pq"
)

func panicIfErr(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	db, err := sql.Open("postgres", "sslmode=disable")
	panicIfErr(err)
	defer db.Close()

	_, err = db.Exec(`CREATE TEMP TABLE joeshaw_time_test ("t" timestamp with time zone);`)
	panicIfErr(err)

	goTime := time.Date(2014, 1, 31, 14, 50, 20, 720408938, time.Local)
	fmt.Println("Original Go time:", goTime)

	_, err = db.Exec(`INSERT INTO joeshaw_time_test VALUES ($1);`, goTime)
	panicIfErr(err)

	rows, err := db.Query(`SELECT t FROM joeshaw_time_test;`)
	panicIfErr(err)
	defer rows.Close()

	var sqlTime time.Time
	for rows.Next() {
		err := rows.Scan(&sqlTime)
		panicIfErr(err)
		fmt.Println("Time retrieved from pg:", sqlTime)
		break
	}
	err = rows.Err()
	panicIfErr(err)

	truncGoTime := goTime.Truncate(time.Microsecond)
	fmt.Println("Go time truncated to µs:", truncGoTime)
	fmt.Println("sqlTime == truncGoTime:", truncGoTime.Equal(sqlTime))
}
```
@deafbybeheading oh, because my eyes apparently aren't working and I didn't see it. Welp, that'll work fine for me then. Shall I close this?
@joeshaw nah, I'd like to leave it open--I think we should explicitly round in pq.
I agree, we should explicitly truncate or round and document the behaviour. Naively, I'd think we ought to truncate (in order not to push timestamps into the future), but I doubt any of that matters in the case of nanoseconds vs. microseconds. Rounding would preserve backwards compatibility, if that's what Postgres is doing. The user can truncate before handing the time.Time to database/sql if that's the behaviour he wants. So I guess my vote is for rounding.
Actually, I traced down the code (DecodeNumberField in src/backend/utils/adt/datetime.c) and it calls rint(), which rounds halfway values to the nearest even value. time.Round(), by contrast, guarantees to round halfway values up (away from zero). So both time.Round() and time.Truncate() will break backwards compatibility.
Any thoughts on this? I think we should just Round() in pq. This would still break roughly 1/20 of the time, but I'd be surprised if someone was relying on PG's behaviour in their code or test cases right now.
Yeah, I think Round() is the way to go.
When I ran into this (also in unit tests) I used Round() to solve it, so it's precisely the expected behavior in my opinion.
Of course, time.Round() and time.Truncate() are Go 1.1+ only :-(. |
Now that we're not stuck on Go 1.0 anymore, I think we should time.Truncate() here. It's less compatible with the old behaviour than Round(), but it feels icky pushing times into the future, even if it's about microseconds. What do you think? |
I think we should leave this to the user. It is true that Postgres and Go have differing precisions for temporal values. Users (and their tests) that are concerned about nanosecond precision should notice (and their tests should fail.) I think it is the responsibility of the driver to ensure values it sends to the backend are semantically equivalent to the values it receives from the user. Conversely, it should ensure values received from the backend are semantically equivalent to the values it sends to the user. |
I agree with @cbandy here. Don't make assumptions on what the user might want. I think the real fix is a patch to postgres to support nanosecond precision. |
That's never gonna happen, so it's either time.Truncate() or status quo. |
@johto Is there a special reason for that? I cannot find any discussion regarding that. |
You'd have to either make the on-disk footprint of timestamps bigger than it already is (8 bytes), or greatly reduce the range of possible timestamps for (at least with current hardware) no gain. And in both cases breaking backwards compatibility for an absolutely massive number of applications. |
@johto Does …
…time (#1774)

Change the `TestInsertOrUpdateAsset` test to `Round` instead of `Truncate` time values that are being compared with time that was stored in Postgres.

Intermittently the `TestInsertOrUpdateAsset` test will fail because a time value that's been stored in Postgres doesn't match the time value we gave to Postgres to store. We attempt to make them match by using `Truncate` on both the input and output values, but that doesn't work in some cases.

I managed to reproduce the failure 100% of the time by using a specific input time of `12:08:50.9419997`, where the `milliseconds` component will be rounded up because of the values in the `nanosecond` component when Postgres reduces the value to `microseconds`. To see the exact failure case, take a look at the first commit, where I hardcoded it for consistent failures. This isn't the only failure case, though: when I ran the tests thousands of times, it failed plenty of times. Because we use the `Truncate` function, we end up comparing a rounded-down value with a rounded-up value whenever Postgres's microsecond rounding carries into the millisecond component.

To illustrate exactly what is happening here, this is our original time:

```
12:08:50.9419997
```

When Postgres reduces it to microseconds, which is six decimal places, it rounds to the nearest, which in this case is up:

```
12:08:50.942000
```

In our test we use the truncate function to truncate to milliseconds:

```
12:08:50.941
```

Using `Round` instead causes us to round up to the same value.

The fix in this change isn't perfect according to discussion at lib/pq#227 (comment), where a commenter states that Postgres's rounding function rounds to nearest and *in the event of a tie rounds to even*, which is different from Go's `Round` function, which rounds to nearest and *in the event of a tie rounds away from zero*. This could cause a problem in theory. However, in the tests I ran with time `12:08:50.9419995`, rounding to the nearest `microsecond`, both Postgres and Go rounded in the same direction, so I don't see evidence that this is an issue. Even if it were, I think this change is the best simple change we can make to the test, and it would significantly reduce the number of cases in which this test fails.

Close #1733
PR to resolve this issue, given the Postgres accuracy limit: https://www.postgresql.org/docs/current/datatype-datetime.html
@johto Why couldn't Postgres have a separate nanosecond-precision timestamp type?
Tests were failing when comparing the retrieved user object's and the created user object's timestamp values. PostgreSQL stores timestamps at microsecond resolution, while Go supports nanosecond resolution, and pq formats them in RFC3339Nano format, which includes nanoseconds. Refer to this issue lib/pq#227 and this Stack Overflow answer https://stackoverflow.com/a/60434843
Postgres (at least 9.3) can only store timestamps in microsecond resolution. Go supports nanosecond resolution, and pq formats them in `RFC3339Nano` format, which includes nanoseconds. Postgres takes the nanoseconds and rounds them to the nearest microsecond. Go's `time.Time` has a `Truncate` method which will truncate, rather than round, to the provided precision.

When writing unit tests to ensure that the data written to the database is as expected, I (half of the time) get failures due to the different behavior. It would seem like truncating the `time.Time` to microsecond resolution before inserting into Postgres is the right thing to do, to get more predictable behavior between Postgres and Go.