Dynamic retention - data gets less granular with time #2485
Comments
There is no downsampling implemented right now, and it's not really planned either. You can emulate the setup by having a long-term Prometheus federate at larger intervals from your short-term Prometheus. Note that larger scrape intervals lead to worse compression, so the returns of downsampling are somewhat diminished. Finally, I recommend discussing questions like this on the prometheus-users mailing list.
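The federation setup described above might look roughly like this in the long-term server's configuration. This is only a sketch: the target hostname and the `match[]` selector are placeholder assumptions, and in practice you would narrow the selector to the series you actually want to retain long-term.

```yaml
# Sketch of a long-term Prometheus federating from a short-term one.
# 'short-term-prometheus:9090' and the match[] selector are placeholders.
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 5m        # coarser than the short-term server's interval
    honor_labels: true         # keep the original job/instance labels
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job=~".+"}'        # pulls everything; restrict this in practice
    static_configs:
      - targets:
          - 'short-term-prometheus:9090'
```

The short-term server keeps fine-grained data for a short retention window, while the long-term server stores only the coarser 5-minute samples it pulls via `/federate`.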
beorn7 closed this on Mar 8, 2017
lock bot commented on Mar 23, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
lock bot locked and limited conversation to collaborators on Mar 23, 2019
waqark3389 commented on Mar 8, 2017
I have an interesting question. If I keep 6 months' worth of metrics, e.g. CPU usage, this takes up unnecessary disk space, as I don't really want to see what the CPU usage was like 5 months ago at a particular minute. Would it be possible to consolidate or roll up all points in a series past a certain age, averaging them daily and storing just one value per day instead of every scrape result?
Essentially, the raw data is deleted (for disk space and performance) but a daily average is kept.
I want to keep 6 months' worth of data, but I don't really need a very granular view of what happened 6 months ago.
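A partial approximation of the daily rollup described above is a Prometheus recording rule over a long range, which precomputes the average into a new series. This is only a sketch: the metric name `node_cpu_usage` and the rule/group names are hypothetical, and note that recording rules do not delete the underlying raw samples, so this alone does not reclaim disk space.

```yaml
# Sketch of a recording rule approximating a daily average rollup.
# 'node_cpu_usage' is a hypothetical metric name.
groups:
  - name: daily_rollups
    interval: 1h               # re-evaluate the rolling daily average hourly
    rules:
      - record: job:node_cpu_usage:avg_over_time_1d
        expr: avg_over_time(node_cpu_usage[1d])
```

The rolled-up series survives even if the raw series is later dropped by a shorter retention window on another server, which is why this pattern is usually combined with the federation setup mentioned in the reply above.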