
query: invalid rate/irate with deduplication for most recent values #2890

Open
creker opened this issue Jul 14, 2020 · 29 comments
Labels
bug, component: query, dont-go-stale (important issues the stalebot should not close), help wanted, needs-more-info

Comments

@creker

creker commented Jul 14, 2020

Thanos, Prometheus and Golang version used:
bitnami/thanos:0.13.0
prom/prometheus:v2.19.0

Prometheus in HA configuration, 2 instances. Single Thanos querier instance.

Object Storage Provider:
On premise minio deployment

What happened:
Executing rate or irate with deduplication sometimes results in the most recent value being invalid: it either shoots up to a very high value or drops to a very low one. Turning off deduplication produces correct results from both Prometheus instances.

With deduplication, good result:
[screenshot: with_dedup_good]

With deduplication, bad result. Executing the same query eventually gives an incorrect result like this:
[screenshot: with_dedup_bad]

The same query without deduplication always produces the correct result:
[screenshot: without_dedup]

What you expected to happen:
The query with deduplication should always give the correct result, like this:
[screenshot: with_dedup_good]

How to reproduce it (as minimally and precisely as possible):
Prometheus in HA, any rate or irate query.

Full logs to relevant components:
Nothing that would correlate with queries.

@bwplotka
Member

Thanks for reporting. We thought we had found all those kinds of issues, but there might be more. The last one was supposed to be fixed in 0.13.0. Can you double-check that you are running Thanos with the fix from #2401 included? (:

@bwplotka bwplotka added the bug label Jul 14, 2020
@bwplotka
Member

Note that it's the Querier version that matters for deduplication.

@creker
Author

creker commented Jul 14, 2020

Here's the Querier version report:

thanos, version 0.13.0 (branch: HEAD, revision: adf6facb8d6bf44097aae084ec091ac3febd9eb8)
  build user:       root@ee9c796b3048
  build date:       20200622-09:49:32
  go version:       go1.14.2

@creker
Author

creker commented Jul 14, 2020

I can try 0.14.0 if it contains any relevant fixes

@bwplotka
Member

Please update, but I think it should have the fix. If 0.14 doesn't work, then what would be awesome is to have the exact chunks for that problematic period. You can obtain those by running the following script against the Querier gRPC API directly: https://github.com/thanos-io/thanos/blob/40526f52f54d4501737e5246c0e71e56dd7e0b2d/scripts/insecure_grpcurl_series.sh (: This will give us the exact input that the deduplication logic is using.

I think it has to do with some edge values... 🤔

@creker
Author

creker commented Jul 14, 2020

0.14 has the same problem. I had to use a different metric to catch it. The problem persists for a few seconds, so I tried to capture both correct and incorrect results.
bad.txt
good.txt

@csmarchbanks

I was about to report this same issue!

What is happening in my system, at least, is that the initialPenalty is 5000ms by default, but I have scrape intervals in the 15s-30s range. The two HA Prometheus instances will commonly scrape the same target ~10s apart, which means the deduplication logic will always switch series after the first sample, causing extra data.

Here is a failing test that reproduces the behavior I see: master...csmarchbanks:unstable-queries

I pushed out a custom version of Thanos that uses a 30s initial penalty and the problem has gone away for me. However, if someone had a 1m scrape interval, a 30s initial penalty still would not be enough, and it would be way too big for someone with a 5s scrape interval.
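To make the failure mode described above concrete, here is a rough, self-contained Go sketch of the penalty-based dedup idea. It is not the actual Thanos implementation (that lives in the querier's dedup code); the sample type, the seek helper, and the exact penalty arithmetic are simplified assumptions for illustration only.

```go
// A minimal sketch of penalty-based deduplication between two replica series.
// NOT the real Thanos iterator; names and penalty arithmetic are illustrative.
package main

import "fmt"

type sample struct {
	T int64 // timestamp in milliseconds
	V float64
}

// seek returns the index of the first sample in s (starting at i) with T >= t.
func seek(s []sample, i int, t int64) int {
	for i < len(s) && s[i].T < t {
		i++
	}
	return i
}

// dedup merges two replica series. On each step both replicas are advanced to
// at least lastT+1 plus their current penalty, and the earlier of the two
// candidate samples is emitted. The replica that was not used gets its penalty
// raised (here: twice the gap since the previously emitted sample), so it is
// skipped further ahead next time. With a 5s initial penalty and replicas
// scraping the same target 10-30s apart, the merged stream switches replicas
// and ends up with unevenly spaced samples, which rate/irate can turn into
// spikes for the most recent values.
func dedup(a, b []sample, initialPenalty int64) []sample {
	var out []sample
	ia, ib := 0, 0
	lastT := int64(-1) // -1 means "nothing emitted yet"
	var penA, penB int64
	for {
		ia = seek(a, ia, lastT+1+penA)
		ib = seek(b, ib, lastT+1+penB)
		if ia >= len(a) && ib >= len(b) {
			return out
		}
		useA := ib >= len(b) || (ia < len(a) && a[ia].T <= b[ib].T)
		if useA {
			if lastT >= 0 {
				penB = 2 * (a[ia].T - lastT)
			} else {
				penB = initialPenalty
			}
			penA = 0
			lastT = a[ia].T
			out = append(out, a[ia])
		} else {
			if lastT >= 0 {
				penA = 2 * (b[ib].T - lastT)
			} else {
				penA = initialPenalty
			}
			penB = 0
			lastT = b[ib].T
			out = append(out, b[ib])
		}
	}
}

func main() {
	// Replica A scrapes at t=0s,15s,30s,45s; replica B scrapes the same target ~10s later.
	a := []sample{{0, 1}, {15000, 2}, {30000, 3}, {45000, 4}}
	b := []sample{{10000, 1}, {25000, 2}, {40000, 3}, {55000, 4}}
	fmt.Println(dedup(a, b, 5000)) // [{0 1} {10000 1} {25000 2} {40000 3} {55000 4}]
}
```

Note how the merged output keeps A's t=0 sample and then follows B, producing a 10s gap followed by 15s gaps; in the real querier the boundary effects land on the newest samples of the query range, which matches the spikes reported above.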

@creker
Author

creker commented Jul 15, 2020

We have a similar setup: two Prometheus instances scraping the same targets with a 10s interval.

@bwplotka
Member

bwplotka commented Jul 16, 2020 via email

@krasi-georgiev
Contributor

I opened a PR that uses the request resolution to handle cases like this one. I am still working on the tests, but so far it is looking good:

#3010

@krasi-georgiev
Contributor

krasi-georgiev commented Aug 27, 2020

Update: resolution-based dedup doesn't work with PromQL functions, so I reverted the PR.

In our case, adjusting the default look-back delta solved the problem, so it seems the problem here is different and caused by scrape-time shifting.

@csmarchbanks, in your case I think the main problem is that scraping between the different replicas is shifted by 30s. That seems like a lot; if you manage to align them better, the problem should be solved.

@sevagh
Contributor

sevagh commented Oct 6, 2020

I think I'm running into a similar issue on Thanos 0.14.0.

The PromQL is sum(irate(my_app_counter[5m])), where my_app_counter is a counter.

Thanos with deduplication unchecked:
[screenshot]
Thanos with deduplication checked shows a huge spike:
[screenshot]
Prometheus replica 1:
[screenshot]
Prometheus replica 2:
[screenshot]

I have 2 replica Prometheus pollers scraping with a 60s interval. I tried the suggestion here to increase "initialPenalty" for a few tests, but even setting "initialPenalty = 60,000" didn't help: #2890 (comment)

It's possible they are out of sync - what's the best way to get them back in sync? Restart both pollers simultaneously and pray?

@krasi-georgiev
Contributor

Try adjusting the look-back delta; this should solve the problem.

@sevagh
Contributor

sevagh commented Oct 7, 2020

Thanks - looks like I need to upgrade to 0.15.0+ to be able to use this new flag? What's a good value to pick with a scrape interval of 60s? Is the 5 minute default not big enough?

@sevagh
Contributor

sevagh commented Oct 7, 2020

Looks like upgrading to 0.16.0-rc0 and modifying --query.lookback-delta isn't fixing the above situation.

@krasi-georgiev
Contributor

Hm, strange. Then I am out of ideas, sorry.

@stale

stale bot commented Dec 8, 2020

Hello 👋 Looks like there was no activity on this issue for the last two months.
Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this issue or push a commit. Thanks! 🤗
If there is no activity in the next two weeks, this issue will be closed (we can always reopen it if needed!). Alternatively, use the remind command if you wish to be reminded at some point in the future.

@stale stale bot added the stale label Dec 8, 2020
@bwplotka
Member

bwplotka commented Dec 8, 2020 via email

@stale stale bot removed the stale label Dec 8, 2020
@stale

stale bot commented Feb 6, 2021

Hello 👋 Looks like there was no activity on this issue for the last two months.
Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this issue or push a commit. Thanks! 🤗
If there is no activity in the next two weeks, this issue will be closed (we can always reopen it if needed!). Alternatively, use the remind command if you wish to be reminded at some point in the future.

@stale stale bot added the stale label Feb 6, 2021
@bwplotka
Member

bwplotka commented Feb 7, 2021

Still to investigate / try to repro

@stale stale bot removed the stale label Feb 7, 2021
@kakkoyun
Member

Still valid and needs investigation.

@stale

stale bot commented Jun 3, 2021

Hello 👋 Looks like there was no activity on this issue for the last two months.
Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this issue or push a commit. Thanks! 🤗
If there is no activity in the next two weeks, this issue will be closed (we can always reopen it if needed!). Alternatively, use the remind command if you wish to be reminded at some point in the future.

@stale stale bot added the stale label Jun 3, 2021
@kakkoyun
Member

kakkoyun commented Jun 3, 2021

Still valid.

@stale stale bot removed the stale label Jun 3, 2021
@stale

stale bot commented Aug 2, 2021

Hello 👋 Looks like there was no activity on this issue for the last two months.
Do you mind updating us on the status? Is this still reproducible or needed? If yes, just comment on this issue or push a commit. Thanks! 🤗
If there is no activity in the next two weeks, this issue will be closed (we can always reopen it if needed!). Alternatively, use the remind command if you wish to be reminded at some point in the future.

@stale stale bot added the stale label Aug 2, 2021
@stale

stale bot commented Aug 17, 2021

Closing for now as promised, let us know if you need this to be reopened! 🤗

@stale stale bot closed this as completed Aug 17, 2021
@SuperQ
Contributor

SuperQ commented May 2, 2023

I think this is still valid, not stale.

@rrondeau

Just hit and discovered this issue with Thanos 0.32.4.

@Lord-Y

Lord-Y commented Oct 11, 2023

@bwplotka this issue has been closed by the bot while the bug is still valid. Can we get some feedback and reopen the issue?

@GiedriusS GiedriusS reopened this May 3, 2024
@stale stale bot removed the stale label May 3, 2024
@GiedriusS
Member

Could someone help by uploading two blocks and then sharing what query to execute and at which timestamp, so we can reproduce this?

@GiedriusS GiedriusS added the dont-go-stale Label for important issues which tells the stalebot not to close them label May 3, 2024