
Timestamps are incorrectly parsed in Ceilometer statistics #928

Open
petergardfjall opened this issue Jan 27, 2017 · 6 comments
petergardfjall (Contributor) commented Jan 27, 2017

Running openstack4j 3.0.3, I see some strange parsing behavior when query results from Ceilometer are being deserialized.

The following code:

List<? extends Statistics> statistics = client.telemetry().meters().statistics(meter, criteria, downsamplingPeriod);
for (Statistics stat : statistics) {
    this.logger.debug("got statistic: {}", stat);
}

yields the following result (with request logging turned on):

Jan 27, 2017 1:41:06 PM org.glassfish.jersey.filter.LoggingFilter log
INFO: 1 * Client response received on thread pool-1-thread-2
1 < 200
1 < Content-Length: 763
1 < Content-Type: application/json; charset=UTF-8
1 < Date: Fri, 27 Jan 2017 12:41:06 GMT
1 < Server: Werkzeug/0.10.4 Python/2.7.12
1 < x-openstack-request-id: req-2eb2f3d4-e518-4f67-8aa7-62a1012a76b9
[{"count": 30, "duration_start": "2017-01-27T12:35:57.736000", "min": 14.89424786133495, "duration_end": "2017-01-27T12:40:47.764000", "max": 15.131249975411718, "sum": 449.9349666470593, "period": 300, "period_end": "2017-01-27T12:40:55.426000", "duration": 290.028, "period_start": "2017-01-27T12:35:55.426000", "avg": 14.997832221568643, "groupby": null, "unit": "connections/s"}, {"count": 1, "duration_start": "2017-01-27T12:40:57.772000", "min": 14.988076984758623, "duration_end": "2017-01-27T12:40:57.772000", "max": 14.988076984758623, "sum": 14.988076984758623, "period": 300, "period_end": "2017-01-27T12:45:55.426000", "duration": 0.0, "period_start": "2017-01-27T12:40:55.426000", "avg": 14.988076984758623, "groupby": null, "unit": "connections/s"}]

13:41:06.444 [pool-1-thread-2] DEBUG [QueryCall:fetchStatistics:135] - got statistic: CeilometerStatistics{avg=14.997832221568643, count=30, duration=290.028, durationStart=Fri Jan 27 13:48:13 CET 2017, durationEnd=Fri Jan 27 13:53:31 CET 2017, min=14.89424786133495, max=15.131249975411718, sum=449.9349666470593, period=300, periodStart=Fri Jan 27 13:43:01 CET 2017, periodEnd=Fri Jan 27 13:48:01 CET 2017, unit=connections/s}
13:41:06.454 [pool-1-thread-2] DEBUG [QueryCall:fetchStatistics:135] - got statistic: CeilometerStatistics{avg=14.988076984758623, count=1, duration=0.0, durationStart=Fri Jan 27 13:53:49 CET 2017, durationEnd=Fri Jan 27 13:53:49 CET 2017, min=14.988076984758623, max=14.988076984758623, sum=14.988076984758623, period=300, periodStart=Fri Jan 27 13:48:01 CET 2017, periodEnd=Fri Jan 27 13:53:01 CET 2017, unit=connections/s}

As can be seen, the periodStart and periodEnd fields in the returned CeilometerStatistics objects are incorrect when we compare them to the period_start and period_end fields in the raw JSON response. For example, periodStart should be Fri Jan 27 13:35:55 CET 2017 in the first case but is Fri Jan 27 13:43:01 CET 2017.

I haven't been able to dig into the exact cause of the problem, but I've managed to find out that it is likely due to using Parser.asDate to parse these timestamps, since running:

Parser.asDate("2017-01-27T12:35:55.426000")

results in Fri Jan 27 13:43:01 CET 2017.
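The same shift can be reproduced with a plain SimpleDateFormat in its default lenient mode (a minimal sketch; whether Parser.asDate uses this exact pattern internally is an assumption):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class LenientMillisDemo {
    // With lenient parsing, the SSS field consumes all six fractional digits
    // as milliseconds: 426000 ms = 426 s = 7 min 6 s gets added to 12:35:55.
    public static Date parseLenient(String ts) throws ParseException {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.parse(ts);
    }

    public static void main(String[] args) throws ParseException {
        SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
        out.setTimeZone(TimeZone.getTimeZone("UTC"));
        // Prints 2017-01-27T12:43:01, not 12:35:55 -- the shift seen above.
        System.out.println(out.format(parseLenient("2017-01-27T12:35:55.426000")));
    }
}
```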

ZsergUA commented Feb 3, 2017

AFAIK, the parser expects milliseconds after the dot, but in this response (as is common for OpenStack and Python) 426000 is microseconds.
So it is treated as millis, and we get 426000 ms = 426 s = 7 min 6 s, which is exactly how much larger your result is.
The j.u.Date class knows nothing about microseconds, and common Java parsers, even com.fasterxml.jackson.databind.util.StdDateFormat, which is used to parse dates, don't take this into account.

Probably another DateFormat could be written and used as the default for parsing dates, or jackson-databind could be fixed to address this issue.

auhlig (Member) commented Feb 3, 2017

Since you already dug into the details, would you like to submit a fix, @ZsergUA?
It would be much appreciated.

pdube (Contributor) commented Jun 26, 2017

Hey guys, I created a PR to fix this issue; could someone check it out? Thanks

pdube added a commit to cloudops/openstack4j-old that referenced this issue Jul 4, 2017
pdube added a commit to cloudops/openstack4j-old that referenced this issue Jul 10, 2017
pjdarton (Contributor) commented
FYI this issue affects authentication as well (seen in 3.0.3).
e.g. The server returned "expires_at": "2017-08-30T13:46:51.480655Z" and this resulted in a java.util.Date of "Wed Aug 30 14:54:51 BST 2017".
i.e. 13:46:51 + 480655 milliseconds.

When combined with an OpenStack service that always zeros the sub-second part of issued_at (my server told me "issued_at": "2017-08-30T12:46:51.000000Z" in this case), this results in the expected login duration being 8 minutes (480655 ms ≈ 8 minutes) longer than the server permitted, meaning that my code (the Jenkins openstack-cloud plugin) tried to use the token past its expiry time.

I think what's needed is some form of intelligent parsing so we treat the first digit in the sub-second field (if present) as hundreds of milliseconds, the second digit as tens of milliseconds, the third as milliseconds and ignore any extra. i.e. just read it to millisecond precision regardless of the precision it was given in.
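That truncation could be sketched as a small pre-processing step before handing the string to a millisecond-precision parser (a hypothetical helper, not openstack4j code; it assumes at least three fractional digits whenever a fraction is present):

```java
public class FractionTruncator {
    // Keep only the first three fractional digits (hundreds, tens, and units
    // of milliseconds) and drop any extra precision, as suggested above.
    public static String toMillisPrecision(String timestamp) {
        return timestamp.replaceAll("\\.(\\d{3})\\d*", ".$1");
    }

    public static void main(String[] args) {
        // Prints 2017-08-30T13:46:51.480Z -- safe for a millisecond parser.
        System.out.println(toMillisPrecision("2017-08-30T13:46:51.480655Z"));
    }
}
```

A timestamp without a fractional part passes through unchanged, since the regex only matches when a dot and digits are present.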

olivergondza (Collaborator) commented
This specific issue and the attached PR refer specifically to the Ceilometer endpoint. @pjdarton and I observe the same on Keystone tokens. Should that be tracked separately, or will the work continue here?

ZsergUA commented Jan 9, 2019

Looks like it is fixed in Jackson, starting from version 2.9.0.
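Independently of any Jackson version (the 2.9.0 claim above is the commenter's, not verified here), the java.time API parses such timestamps correctly, reading the fraction as a fraction of a second rather than a raw millisecond count:

```java
import java.time.LocalDateTime;

public class JavaTimeDemo {
    public static void main(String[] args) {
        // ISO_LOCAL_DATE_TIME reads ".426000" as 0.426 s (426,000,000 ns),
        // with no spill-over into the minutes field.
        LocalDateTime t = LocalDateTime.parse("2017-01-27T12:35:55.426000");
        System.out.println(t.getMinute() + ":" + t.getSecond() + "." + t.getNano());
    }
}
```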
