What happened:
We are planning to use Thanos for long-term storage, and during the process we have hit a few setbacks. As attached, we are seeing a 15 GB RAM spike for the thanos compactor with 3.5 lakh (350,000) time series. We plan to implement compaction and downsampling for 8M time series, which, extrapolating linearly, would result in the figures below:
15 GB RAM - 3.5 lakh (350,000) time series
360 GB RAM - 8M time series (extrapolated)
360 GB of RAM is too much for short spikes. Below is the configuration we are using; despite setting the concurrency arguments, we are still seeing memory spikes.
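A minimal sketch of the compactor container args we are referring to is below; the paths and values are placeholders rather than our exact configuration:

```yaml
# Thanos compactor container args (placeholder values, not our exact configuration).
# --compact.concurrency and --downsample.concurrency are the kind of concurrency
# arguments mentioned above; setting them has not removed the memory spikes.
args:
  - compact
  - --wait
  - --data-dir=/var/thanos/compact
  - --objstore.config-file=/etc/thanos/objstore.yml
  - --compact.concurrency=1
  - --downsample.concurrency=1
```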
Could you let us know whether this will be fixed in future versions?
What you expected to happen:
We expected RAM utilization to be much lower.
How to reproduce it (as minimally and precisely as possible):
We are running two replicas of Prometheus with the Thanos sidecar, writing to MinIO S3 object storage in the same cluster, with the configuration below.
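The objstore configuration used by the sidecar (and compactor) is along these lines; the bucket name, endpoint, and credentials are placeholders:

```yaml
# Thanos objstore config for MinIO (S3-compatible API); all values are placeholders.
type: S3
config:
  bucket: thanos-metrics            # placeholder bucket name
  endpoint: minio.minio.svc:9000    # in-cluster MinIO service (placeholder)
  access_key: <ACCESS_KEY>
  secret_key: <SECRET_KEY>
  insecure: true                    # plain HTTP inside the cluster
```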
Thanos, Prometheus and Golang version used:
Thanos: 0.34.1
Prometheus: 2.49.2
Golang: 1.21