Retention time configurable per series (metric, rule, ...). #1381
Comments
This is not something Prometheus supports directly at the moment, or for the foreseeable future. The focus right now is on operational monitoring, i.e. the "here and now". You can get something like this by using a tiered system: a first-level Prometheus scrapes all the targets and computes the rules, and a second-level Prometheus federates from it, fetching only the results of those rules. It can do so at a lower resolution. Additionally, the second-level Prometheus could use the (experimental) remote storage facilities to push these time series to OpenTSDB or InfluxDB as they are federated in. To query those you will need to use their own query mechanisms; there is no read-back support at the moment.
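The tiered setup described above could be sketched as a federation scrape job on the second-level Prometheus. This is an illustrative sketch, not from the thread: the job name, target, `match[]` expression, and 1-minute interval are all assumptions.

```yaml
# scrape_configs entry on the second-level Prometheus (illustrative sketch)
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 1m          # lower resolution than the first level
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{__name__=~"job:.*"}' # fetch only recording-rule results
    static_configs:
      - targets:
        - 'first-level-prometheus:9090'   # hypothetical hostname
```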
The "5min problem" is handled by #398. The planned grouping of rules will allow individual evaluation intervals per group, so something like a "1 hour aggregate" can be configured in a meaningful way. The missing piece is retention time per series, which is what I will rename this bug to and turn into a feature request. We have discussed it several times. It's not a high priority right now, but certainly something we would consider.
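Under the rule-group design referenced here (#398, which later shipped in Prometheus 2.x), a per-group evaluation interval could look like the following sketch; the group name, metric names, and the 1-hour interval are made-up examples:

```yaml
groups:
  - name: hourly-aggregates
    interval: 1h                 # per-group evaluation interval
    rules:
      - record: job:http_requests:rate1h   # hypothetical aggregate series
        expr: sum by (job) (rate(http_requests_total[1h]))
```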
beorn7 added the feature-request label on Feb 13, 2016
beorn7 changed the title from "Graphite-like retention setup" to "Retention time configurable per series (metric, rule, ...)." on Feb 13, 2016
fabxc added the kind/enhancement label and removed the feature request label on Apr 28, 2016
klausenbusk commented Jul 24, 2016
A per-job retention period is what I need for my use case. I pull 4 metrics from my solar panel every 30 seconds and want to store them forever (so I can, for example, go 6 months back and see the production at that moment), but I don't need that for all the other metrics (like Prometheus's own metrics).
Prometheus is not intended for indefinite storage; you want #10.
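For reference, the long-term-storage direction referenced here eventually surfaced as Prometheus's `remote_write` configuration, which ships samples to an external store as they are ingested. A minimal sketch, with a placeholder endpoint URL:

```yaml
# Sketch only: the endpoint URL is a placeholder, not a real service
remote_write:
  - url: "http://long-term-store.example.com/api/v1/write"
```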
klausenbusk commented Jul 24, 2016
I see that #10 makes sense if you have a lot of time series, but OpenTSDB seems like overkill just to store 4 time series forever. Isn't it just a question of allowing people to set the retention period to forever? Or do you think people would "abuse" that?
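A very long (though not literally infinite) retention can be approximated with Prometheus's retention flag. This is a configuration sketch under the assumption of a Prometheus 2.x binary; the 10-year value is arbitrary:

```shell
# --storage.tsdb.retention.time supersedes the older --storage.tsdb.retention flag
prometheus --config.file=prometheus.yml \
           --storage.tsdb.retention.time=10y
```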
We make design decisions that presume Prometheus data is ephemeral and can be lost/blown away with no impact.
onorua commented Mar 26, 2017
Coming here from the Google Groups discussion about the same topic.
brian-brazil added the priority/Pmaybe and component/local storage labels on Jul 14, 2017
brian-brazil referenced this issue on Sep 21, 2017: [Feature Request] Allow retention config per scrape job #3200 (closed)
I plan to tackle this today. Essentially it would mean regularly calling the delete API and cleaning up the tombstones in the background. The question is where this should live. My inclination is that we could leverage the delete API itself, add a tombstone-cleanup API, and add functionality to promtool to call these APIs regularly with the right matchers. Otherwise I would need to manipulate the blocks on disk with a separate tool, which, I must say, I'm not inclined to do.
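The two endpoints discussed here do exist in later Prometheus releases as admin APIs, gated behind `--web.enable-admin-api`. A sketch of the kind of periodic call being described; the matcher, the 30-day window, and the localhost address are assumptions, and the `|| true` keeps the sketch from aborting when no server is running:

```shell
# Compute a cutoff 30 days in the past (GNU date); window is an assumed example
CUTOFF=$(date -d '30 days ago' +%s)

# Delete matching series up to the cutoff (requires --web.enable-admin-api)
curl -s -X POST \
  "http://localhost:9090/api/v1/admin/tsdb/delete_series?match[]=up{job=\"node\"}&end=${CUTOFF}" || true

# Then reclaim disk space by cleaning up the tombstones
curl -s -X POST "http://localhost:9090/api/v1/admin/tsdb/clean_tombstones" || true
```

In the promtool-driven design described above, a cron job or similar would run these two steps on a schedule, one pair of calls per retention rule.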
One alternative is to make it part of the tsdb tool and "mount" the tsdb tool under "promtool tsdb", which has other nice benefits. That would make the functionality usable outside of the Prometheus context. Prometheus users would need to run 2 extra commands to disable/enable compaction, or we could just wrap those around it when calling via promtool.
My concern there is the edge cases: what if the request to restart compaction fails? While the tsdb tool makes perfect sense on static data, I think it would be cleaner if we could make it an API. Having it as an API also allows us to make it a feature of Prometheus if people care and Brian agrees ;)
I wouldn't object to delete and force-cleanup functionality being added to promtool. I have a general concern that users looking for this tend to be over-optimising and misunderstanding how Prometheus is intended to be used, as in the original post of this issue. I'd also have performance concerns with all this cleanup going on.
krasi-georgiev removed the component/local storage label on Nov 9, 2018
I don't think anything can be done on the tsdb side for this, so I removed the component/local storage label. There doesn't seem to be big demand for such a use case, and since the issue is so old, maybe we should close it and revisit if it comes up again, or if @taviLaies is still interested in this.
taviLaies commented Feb 10, 2016
Hello,
I'm evaluating Prometheus as our telemetry platform and I'm looking to see if there's a way to set up Graphite-like retention.
Let's assume I have a retention period of 15d in Prometheus, and I define aggregation rules that collapse the samples into 1h aggregates. Is there a way to keep this new metric around for more than 15 days?
If this is not possible, could you provide some insight into how you approach historical data in your systems?
Thank you