
datetime (convert_datetime_type) seems to add in extra milliseconds #6680

Closed

clarsen opened this issue Jul 27, 2017 · 3 comments · Fixed by #7366
clarsen commented Jul 27, 2017


ALL software version info (bokeh, python, notebook, OS, browser, any other relevant packages)

version_json = '''
{
"date": "2017-06-13T09:22:29-0500",
"dirty": false,
"error": null,
"full-revisionid": "6070bf96d9367150f423a4fa792a7d242421236b",
"version": "0.12.6"
}
'''

Description of expected behavior and the observed behavior

The function (snippet shown)

def convert_datetime_type(obj):
    ...
    # Datetime (datetime is a subclass of date)
    elif isinstance(obj, dt.datetime):
        return (obj - DT_EPOCH).total_seconds() * 1000. + obj.microsecond / 1000.

which converts a datetime to JSON for use in plotting, does not seem to compute the timestamp correctly.

Complete, minimal, self-contained example code that reproduces the issue

When I do this by hand with a datetime that has non-zero microseconds, for the Unix timestamp 1501131481.011143, the converted value is off:

>>> import datetime
>>> dt = datetime.datetime.fromtimestamp(1501131481.011143)
>>> dt
datetime.datetime(2017, 7, 26, 21, 58, 1, 11143)
>>> DT_EPOCH=datetime.datetime.utcfromtimestamp(0)
>>> dt-DT_EPOCH
datetime.timedelta(17373, 79081, 11143)
>>> (dt-DT_EPOCH).total_seconds()
1501106281.011143
>>> (dt-DT_EPOCH).total_seconds()*1000.
1501106281011.143
>>> dt.microsecond/1000.
11.143
>>> (dt-DT_EPOCH).total_seconds()*1000.+dt.microsecond/1000.
1501106281022.2861
>>> ((dt-DT_EPOCH).total_seconds()*1000.+dt.microsecond/1000.) - (1501131481.011143 * 1000.)
-25199988.856933594

If anything, the difference should be zero, since we're just comparing milliseconds_since_epoch to seconds_since_epoch * 1000. The ~25,200,000 ms difference is due to the naive datetime conversion: that's a 7-hour offset from GMT.
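
As a sanity check on the timezone half of the discrepancy, here is a minimal sketch (mine, not Bokeh code) that builds the datetime with utcfromtimestamp instead of the local-time fromtimestamp used above; the 7-hour offset disappears, leaving only the doubled sub-second component:

import datetime

ts = 1501131481.011143
DT_EPOCH = datetime.datetime.utcfromtimestamp(0)

# Naive datetime, but constructed from UTC rather than local time
d = datetime.datetime.utcfromtimestamp(ts)
converted = (d - DT_EPOCH).total_seconds() * 1000. + d.microsecond / 1000.
print(converted - ts * 1000.)  # ~11.143: only the twice-counted microseconds remain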

It seems that total_seconds() already gives the fractional seconds, so there is no need to add in the microseconds again.

While this doesn't make much difference for data points with second resolution, it shifts points that have sub-second resolution, as the sketch below shows.
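
For illustration, a minimal sketch of the proposed correction (not the actual Bokeh patch in #7366, and reduced to just the datetime branch): drop the extra microsecond term, since total_seconds() already includes it.

import datetime as dt

DT_EPOCH = dt.datetime.utcfromtimestamp(0)

def convert_datetime_type(obj):
    # total_seconds() already includes the fractional (microsecond) part,
    # so no separate microsecond term is needed
    if isinstance(obj, dt.datetime):
        return (obj - DT_EPOCH).total_seconds() * 1000.

ts = 1501131481.011143
d = dt.datetime.utcfromtimestamp(ts)
# Round-trips to within floating-point precision at millisecond scale
assert abs(convert_datetime_type(d) - ts * 1000.) < 0.01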

(I'll submit a patch if it isn't already fixed.)


bryevdv (Member) commented Jan 3, 2018

Time zone issues should be handled at this point, but a recent mailing list post noted the issue with the sub-second component. After a (very) brief look, it seems the milliseconds component was doubled, which would be explained by the hypothesis above.
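
A quick way to see the doubling, using the datetime from the report above (a sketch, not Bokeh code): compare the sub-second part of the buggy and corrected expressions.

import datetime as dt

DT_EPOCH = dt.datetime.utcfromtimestamp(0)
obj = dt.datetime(2017, 7, 26, 21, 58, 1, 11143)

correct = (obj - DT_EPOCH).total_seconds() * 1000.
buggy = correct + obj.microsecond / 1000.
print(correct % 1000., buggy % 1000.)  # ~11.143 vs ~22.286: milliseconds counted twice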

bryevdv (Member) commented Jan 3, 2018

Thanks for the issue, @clarsen. Such a simple-to-fix issue should have been resolved immediately; I'd like to apologize that this somehow did not get triaged earlier (and hence was overlooked until now).

clarsen (Author) commented Feb 19, 2018 via email
