
Some metrics are late on Victoria query endpoint #1484

Closed
xenofree opened this issue Jul 22, 2021 · 9 comments
Labels
bug (Something isn't working), question (The question issue)

Comments

@xenofree

Some metrics appear in Prometheus and in VictoriaMetrics' /api/v1/export, but show up late on /api/v1/query.

curl -sg 'http://x.x.x.x:8428/api/v1/export?match[]=probe_duration_seconds{instance="https://www.google.fr"}&start=2021-07-22T13:50:00.000Z'

"values": [
0.166824193,
0.170407809
],
"timestamps": [
1626962464746,
1626962524746
]

curl -sg 'http://x.x.x.x:8428/api/v1/query?query=probe_duration_seconds{instance="https://www.google.fr"}'
"value": [
1626962531,
"0.166824193"
]

curl -sg 'http://x.x.x.x:9090/api/v1/query?query=probe_duration_seconds{instance="https://www.google.fr"}'
"value": [
1626962531.511,
"0.170407809"
]

date +%s
1626962531

Why are the timestamps on /api/v1/export not the same as the OS timestamp?
Why do metrics appear on /api/v1/export but not on the /api/v1/query endpoint?

When querying through Grafana, the last values are also wrong.

[screenshot: grafana_victoria_wrong_last_value]

I tried with single-node VictoriaMetrics and cluster mode, but in both cases I have these issues.

Version
docker exec victoriametrics ps
/victoria-metrics-prod -storageDataPath=/victoria-metrics-data -retentionPeriod=12 -dedup.minScrapeInterval=1m

docker exec victoriametrics /victoria-metrics-prod --version
victoria-metrics-20210715-111307-tags-v1.63.0-0-g61cc13c16

@f41gh7
Collaborator

f41gh7 commented Jul 23, 2021

VictoriaMetrics returns data points at the query endpoints with a delay; this is controlled by the flag:

-search.latencyOffset duration
      The time when data points become visible in query results after the collection. Too small value can result in incomplete last points for query results (default 30s)

You can try adjusting it to the value you need.
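
As a sketch, the flag can simply be appended to the container command line shown in the issue description (the docker run invocation below is illustrative; the image name, port mapping, volume and the 10s value are assumptions and may differ in your setup):

# illustrative single-node example; only -search.latencyOffset is new compared to the setup above
docker run -d --name victoriametrics \
  -p 8428:8428 \
  -v victoria-metrics-data:/victoria-metrics-data \
  victoriametrics/victoria-metrics \
  -storageDataPath=/victoria-metrics-data \
  -retentionPeriod=12 \
  -dedup.minScrapeInterval=1m \
  -search.latencyOffset=10s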

Why are the timestamps on /api/v1/export not the same as the OS timestamp?

The timestamps come from the metrics themselves if they were pushed via the remote write protocol. If the metrics were scraped by VictoriaMetrics, the timestamp is the time of the scrape.
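
As a quick sanity check (assuming GNU date is available), the exported timestamps can be converted to wall-clock time; they should land roughly 60 seconds apart, matching the scrape interval rather than the moment the query was issued:

# the export timestamps are in milliseconds; drop the last three digits for date(1)
date -u -d @1626962464
date -u -d @1626962524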

@f41gh7 f41gh7 added the question The question issue label Jul 23, 2021
@xenofree
Author

Thanks for your answer.

I set -search.latencyOffset to 1s and the problem seems to be resolved when querying with curl.

But I still have the issue with Grafana: the last values are wrong and some metrics appear late.

[screenshots: last_value_wrong, metrics_late]

@xenofree
Author

The issue is also visible when querying /api/v1/query_range with curl.

END=$(date +%s)
START=$(($END-300))
curl -sg "http://x.x.x.x:8428/api/v1/query_range?query=count(up)&start=$START&end=$END&step=60" | jq
{
  "status": "success",
  "data": {
    "resultType": "matrix",
    "result": [
      {
        "metric": {},
        "values": [
          [1627049460, "1134"],
          [1627049520, "1134"],
          [1627049580, "1134"],
          [1627049640, "1134"],
          [1627049700, "1134"],
          [1627049760, "902"]
        ]
      }
    ]
  }
}
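
One way to check whether the incomplete last bucket is simply the latency offset at work is to re-run the same query with the end time shifted back past the offset (the 60-second shift below is illustrative; use whatever comfortably exceeds your -search.latencyOffset value):

END=$(( $(date +%s) - 60 ))
START=$(($END-300))
curl -sg "http://x.x.x.x:8428/api/v1/query_range?query=count(up)&start=$START&end=$END&step=60" | jq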

@Zhouhenry

@xenofree maybe you can check the time difference between the machine running VictoriaMetrics and your PC.
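
A minimal way to compare the two clocks (run this on both the VictoriaMetrics host and the querying machine; the outputs should agree to within a second or so):

date -u +%s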

@xenofree
Author

xenofree commented Aug 1, 2021

Everything is synchronized with NTP.
I even have the same issue on localhost.

@valyala
Collaborator

valyala commented Aug 15, 2021

This may be related to time series staleness handling in VictoriaMetrics, which works differently than in Prometheus. See this issue. This should be fixed in the next release of VictoriaMetrics. See the umbrella issue for details.

@valyala valyala added the bug Something isn't working label Aug 15, 2021
@valyala
Collaborator

valyala commented Aug 15, 2021

FYI, VictoriaMetrics and vmagent gained support for Prometheus staleness markers starting from the release v1.64.0.
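
To check that the running binary already includes staleness-marker support, the version command from the issue description can be reused; anything at v1.64.0 or later qualifies:

docker exec victoriametrics /victoria-metrics-prod --version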

@hagen1778
Collaborator

See also #2061

@hagen1778
Collaborator

Closing the issue as completed. Feel free to reopen if it can still be reproduced.
