Grafana only shows raw data from Thanos #7296
Comments
@stalemate3 you are telling Grafana that you want the minimum step to be 1 hour, i.e. at most one datapoint per hour. At that step size, Thanos decided to use the raw data to answer the query. By default, Thanos does not answer queries by mixing data from different downsampling levels -- only a single level is used per query. That is why you don't see the downsampled data there. You can do two things:
Personally I would go with option 1.
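For anyone hitting the same thing, here is a minimal sketch of one way to let Thanos Query pick a downsampling level automatically, assuming the Bitnami Thanos chart exposes an extra-flags list for the query component (the `query.extraFlags` key name is an assumption about the chart's values layout; `--query.auto-downsampling` is the upstream Thanos Query flag):

```yaml
# Sketch only: Helm values override for the Bitnami Thanos chart.
# The query.extraFlags key name is an assumption about the chart's values layout.
# --query.auto-downsampling lets Thanos Query choose a downsampling level
# based on the query step when the client does not specify one.
query:
  extraFlags:
    - --query.auto-downsampling
```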
I think there is a heuristic somewhere that the max resolution is step/5 or something like that! A 5h step would then be answered from the 1h downsampled data.
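If that heuristic is roughly right, the arithmetic for the example above would be (illustrative only; the exact rule would need to be checked in the Thanos source):

```math
\text{max\_source\_resolution} \approx \frac{\text{step}}{5} = \frac{5\,\mathrm{h}}{5} = 1\,\mathrm{h}
```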
I face the same problem with a similar setup (just running it on on-prem K8s with the latest Thanos and the latest Bitnami Helm chart). For some reason I don't see downsampled data in Grafana even though the query component has …
Honestly I didn't have time to deal with this problem and I hoped a version bump would magically fix it, but since you are already using the latest versions I'm kind of lost as to what the fix would be. The workaround to set the …
We solved that by adding …
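The exact setting is not preserved in the comment above, so purely as a hedged sketch: one way this kind of fix is commonly applied is from the Grafana side, by appending `max_source_resolution=auto` to every query through the Prometheus datasource's `customQueryParameters`. The datasource name and URL below are placeholders, not values from this issue:

```yaml
# Sketch only: Grafana datasource provisioning that appends
# max_source_resolution=auto to every query sent to Thanos Query.
# The name and url are placeholders.
apiVersion: 1
datasources:
  - name: Thanos
    type: prometheus
    url: http://thanos-query.monitoring.svc:9090
    jsonData:
      customQueryParameters: max_source_resolution=auto
```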
Thx @jaygridley |
This did the trick, huge thanks to you!
Thanos, Prometheus, Grafana and Golang version used:
Thanos: Bitnami Thanos Helm chart version 12.23.2 (application version: 0.34.0)
Prometheus: kube-prometheus-stack Helm chart version 56.4.0 (application version: 2.49.1)
Grafana: Bitnami Grafana Operator Helm chart version 3.5.14 (application version: 10.2.3)
Golang: go1.21.6
Object Storage Provider:
AWS S3
What happened:
I have Thanos installed on 3 AWS EKS clusters, and I recently discovered the same issue on all of them: when I query data from Thanos in Grafana, it only shows the raw data and not the downsampled data.
The configurations and versions are the same on all 3 k8s clusters.
I'm using the default retention resolutions (see the values sketch just below):
compactor.retentionResolutionRaw 30d
compactor.retentionResolution5m 30d
compactor.retentionResolution1h 10y
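For reference, a sketch of how these settings would look in a values file for the Bitnami Thanos chart (nesting inferred from the dotted key names above):

```yaml
# Sketch only: the retention settings above as nested Helm values.
compactor:
  retentionResolutionRaw: 30d
  retentionResolution5m: 30d
  retentionResolution1h: 10y
```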
But here's what I see on Grafana:
I'm quite sure it is not using the downsampled data, because last month I discovered an issue where the Compactor did not compact the raw data because its default PVC size (8Gi) was too small. In that state Thanos only had raw data, and Grafana showed all of it, unlike in the picture above. After increasing the PVC size for the Compactor, it compacted almost a year of data successfully, and it looks fine to me in the Bucketweb UI:
From these 2 pictures it is clear that the earliest data point in Grafana matches the Start Time of the raw data.
What you expected to happen:
To see the auto-downsampled 1h data in Grafana instead of the raw data, which is only retained for 30 days by default.
Noteworthy information:
TBH I'm not sure whether this is a Thanos or a Grafana issue, but given that the Grafana dashboards worked perfectly fine with the raw data and not with the downsampled data, my best guess is that the issue is on the Thanos side. I'm happy to be proven wrong here; this has already taken more of my time than it should.
Full logs to relevant components:
Thanos Query config
Thanos Query Logs
Please let me know if you need any other logs, configuration, or other useful information.