Incorporate Data Resampling and Destruction Policy #54
matttproud commented on Jan 28, 2013

The datastore grows ad infinitum right now. We need a couple of capabilities:

Comments
We effectively have a destruction policy now, which defaults to ten days. Through the curator, it would be possible to implement a resampler easily.
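For readers unfamiliar with the mechanism: a destruction policy of this kind is a periodic sweep that deletes samples older than a retention cutoff. The sketch below is a hypothetical illustration of such a sweep in Go; the `Storage` interface, `DropSamplesBefore` method, and `RunRetention` function are invented for illustration and are not the actual Prometheus curator API.

```go
package curator

import (
	"log"
	"time"
)

// Storage is a hypothetical stand-in for the local sample store; the real
// Prometheus curator operates on its own storage internals.
type Storage interface {
	// DropSamplesBefore deletes all samples older than t and reports how
	// many were removed.
	DropSamplesBefore(t time.Time) (int, error)
}

// RunRetention enforces a destruction policy: every interval it deletes
// samples that have aged beyond the retention window, until stop is closed.
func RunRetention(s Storage, retention, interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			cutoff := time.Now().Add(-retention)
			n, err := s.DropSamplesBefore(cutoff)
			if err != nil {
				log.Printf("retention pass failed: %v", err)
				continue
			}
			log.Printf("retention pass removed %d samples older than %s", n, cutoff.Format(time.RFC3339))
		case <-stop:
			return
		}
	}
}
```

The ten-day default mentioned above corresponds to a retention of 240 * time.Hour. A resampler could presumably reuse the same periodic sweep, rewriting old samples rather than only deleting them, which is what the comment suggests by implementing it through the curator.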
fabxc removed this from the Public Release Announcement milestone on Sep 21, 2015
Retention exists; downsampling is no longer a goal, IIRC. At least I have a hard time seeing the benefit of downsampling at our level of compression. It could stretch the retention window a bit, but the only thing that actually solves the underlying problem is distributed remote storage from which Prometheus components can read.
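To make the trade-off concrete: downsampling here means replacing raw samples with aggregates over coarser time buckets, sacrificing resolution to stretch the retention window. The sketch below is a hypothetical illustration only, averaging samples into fixed-width buckets; the `Sample` type and `Downsample` function are not part of Prometheus.

```go
package downsample

import "time"

// Sample is a single timestamped value belonging to one time series.
type Sample struct {
	Timestamp time.Time
	Value     float64
}

// Downsample replaces raw samples with one averaged sample per fixed-width
// bucket. The input must be sorted by timestamp in ascending order.
func Downsample(samples []Sample, bucket time.Duration) []Sample {
	if len(samples) == 0 || bucket <= 0 {
		return nil
	}
	var (
		out       []Sample
		bucketEnd = samples[0].Timestamp.Truncate(bucket).Add(bucket)
		sum       float64
		count     int
	)
	// flush emits the aggregate for the bucket that ends at bucketEnd.
	flush := func() {
		if count > 0 {
			out = append(out, Sample{
				Timestamp: bucketEnd.Add(-bucket), // start of the bucket
				Value:     sum / float64(count),
			})
			sum, count = 0, 0
		}
	}
	for _, s := range samples {
		// Close out any buckets that end before this sample.
		for !s.Timestamp.Before(bucketEnd) {
			flush()
			bucketEnd = bucketEnd.Add(bucket)
		}
		sum += s.Value
		count++
	}
	flush()
	return out
}
```

Given how compactly the local storage already encodes raw samples, a pass like this buys only a modest extension of the window, which is the point being made above: it does not remove the unbounded-growth problem the way remote, distributed storage would.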
I think we're reasonably agreed on no downsampling, but we may allow more granular configuration of retention.
That's sufficiently different that a new issue with the respective requirements should be filed.
fabxc closed this on Sep 21, 2015
simonpasquier pushed a commit to simonpasquier/prometheus that referenced this issue on Oct 12, 2017
lock bot commented on Mar 24, 2019
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.