Update the size of PersistentVolume Prometheus uses #2314
Merged
What this PR does / why we need it:
This PR changes the Prometheus retention to `1w` and the disk size of its PersistentVolume.

**How many times does Prometheus scrape within a week?**
Prometheus scrapes every 60 seconds, so:
604800 s (= 1w) / 60 = 10,080
The number of series is currently:
`count({__name__=~".+"})`
which returns 57,114.

**How many samples should Prometheus hold?**
57,114 * 10,080 = 575,709,120
**How much space should we prepare?**
On average, Prometheus uses only around 1-2 bytes per sample; assume 2 bytes per sample as the worst case:
575,709,120 * 2 = 1,151,418,240 bytes ≈ 1.072 GiB
Plus, there is extra overhead such as indexes, so we need about 1.2 GiB at this point.

2Gi would be enough for now, but we can't predict the size of the cluster stats because it depends on the cluster PipeCD runs on. Also, considering the possibility of more time series appearing (metrics are still in the middle of being improved), I settled on allocating 4Gi for now. For those running a huge cluster that makes series cardinality higher, it's best to change the size through the Helm value.
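The estimate above can be sketched end to end as a short script; the inputs are the numbers quoted in this description (scrape interval, retention, series count), not values measured by the script itself:

```python
# Back-of-the-envelope Prometheus disk estimate, using the numbers
# from this PR description (assumed inputs, not measured here).
scrape_interval_s = 60            # Prometheus scrape interval
retention_s = 7 * 24 * 60 * 60    # 1w retention = 604800 s
series = 57114                    # count({__name__=~".+"}) at the time

samples_per_series = retention_s // scrape_interval_s   # 10080
total_samples = series * samples_per_series             # 575,709,120

bytes_per_sample = 2              # worst case; typically 1-2 bytes
raw_bytes = total_samples * bytes_per_sample            # 1,151,418,240

print(samples_per_series)
print(total_samples)
print(raw_bytes / 1024**3)        # ~1.07 GiB before index overhead
```

Even doubling this for index overhead and headroom stays well under the 4Gi requested, which is why 4Gi is a comfortable default here.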
Which issue(s) this PR fixes:
Fixes #
Does this PR introduce a user-facing change?: