
Changing UDP precision can corrupt default timestamp decoding #8424

Closed
jleclanche opened this issue May 24, 2017 · 4 comments

@jleclanche

Bug report

System info: InfluxDB 1.2, Debian Stretch

Steps to reproduce:

I started by creating a new database and importing old data into it; that data had originally been written through influx's HTTP API, most of it with precision=ms.
I then enabled the UDP API without specifying a precision and started writing measurements without including a timestamp. Everything was okay. I then set precision="s" and everything was still okay, but at that point I realized I needed higher timestamp accuracy, so I switched the UDP input to ms (still not sending timestamps through UDP).

After doing so, all new timestamps are off by a factor of 1000. I then changed udp.precision back to an empty string, but timestamps are still off by 1000x. In fact, if I set it to anything other than "s", it's broken.
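
For context, the setting being toggled here lives in the [[udp]] section of influxdb.conf; the database name below is illustrative:

```toml
[[udp]]
  enabled = true
  bind-address = ":8089"
  database = "mydb"   # illustrative database name
  precision = "ms"    # the value changed between "", "s", and "ms" above
```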

I don't understand how this behaviour makes sense. The UDP writer should know what unit to use when writing to the database; the input precision setting should only be used for decoding incoming timestamp values. When a timestamp value is not specified, the service should generate one at the configured precision, but when writing it to influx it should still produce a correct timestamp.

In other words, setting the precision without specifying a timestamp should never write bad timestamps, and precision="" especially should not write bad timestamps.
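
To make the expected behaviour concrete, here is a minimal sketch of the decoding logic being argued for; decodeTimestamp and its multiplier table are hypothetical, not InfluxDB's actual API:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// decodeTimestamp is a hypothetical helper, not InfluxDB's implementation.
// The precision setting only scales an *explicitly supplied* timestamp into
// nanoseconds; a missing timestamp falls back to the current time, which is
// correct regardless of the configured precision.
func decodeTimestamp(ts, precision string) (time.Time, error) {
	if ts == "" {
		// No timestamp on the wire: generate one. This value should
		// never end up "off by 1000x", whatever precision is set to.
		return time.Now().UTC(), nil
	}
	v, err := strconv.ParseInt(ts, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	mult, ok := map[string]int64{"n": 1, "u": 1e3, "ms": 1e6, "s": 1e9}[precision]
	if !ok {
		mult = 1 // empty/unknown precision: treat the value as nanoseconds
	}
	return time.Unix(0, v*mult).UTC(), nil
}

func main() {
	t, _ := decodeTimestamp("1495584000000", "ms") // explicit ms timestamp
	fmt.Println(t)                                 // 2017-05-24 00:00:00 +0000 UTC
	t, _ = decodeTimestamp("", "ms")               // no timestamp: still correct
	fmt.Println(t)
}
```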

@jleclanche (Author)

Thinking through the code:

  1. The parsing happens in service.go, using ParsePointsWithPrecision with a default of time.Now().UTC().
  2. This jumps to parsePoint, passing the precision argument.
  3. We call scanTime(), which sets the .ts attribute to the timestamp string.
  4. We then create a point, pt; pt.ts is the timestamp string.
  5. If pt.ts is empty, pt.time is set to defaultTime and pt.SetPrecision(precision) is called.
  6. Doing so truncates the time to match that precision (see the sketch below).
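
As a rough sketch of step 6 (not the actual influxdb code; the real method is SetPrecision on the point type in models/points.go), the truncation amounts to:

```go
package main

import (
	"fmt"
	"time"
)

// truncateToPrecision is a rough equivalent of the truncation in step 6;
// the real logic lives in SetPrecision in models/points.go.
func truncateToPrecision(t time.Time, precision string) time.Time {
	switch precision {
	case "u":
		return t.Truncate(time.Microsecond)
	case "ms":
		return t.Truncate(time.Millisecond)
	case "s":
		return t.Truncate(time.Second)
	default: // "" or "n": keep full nanosecond resolution
		return t
	}
}

func main() {
	now := time.Now().UTC()
	// The truncated default time is still a valid wall-clock time; only
	// the sub-unit digits are zeroed.
	fmt.Println(truncateToPrecision(now, "s"))
}
```

Note that truncation by itself only zeroes the sub-unit digits of an otherwise valid nanosecond timestamp, which makes the 1000x offset in the report all the more surprising.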

@jleclanche (Author)

Okay, wtf? I dropped and recreated the database, and it still has the same issue unless I use s precision.

@dgnorton added the 1.x label Jan 7, 2019
stale bot commented Jul 24, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label Jul 24, 2019
stale bot commented Jul 31, 2019

This issue has been automatically closed because it has not had recent activity. Please reopen if this issue is still important to you. Thank you for your contributions.
