This repository has been archived by the owner on Aug 13, 2019. It is now read-only.

Compaction with downsampling #56

Closed
drscre opened this issue Apr 23, 2017 · 3 comments


drscre commented Apr 23, 2017

Prometheus currently has no downsampling support. It can be achieved via federation, but that approach is far too messy.

Maybe it is now possible to integrate downsampling into the compaction process?
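For context, here is a minimal sketch of the federation workaround mentioned above, assuming a second Prometheus server scrapes pre-aggregated series from the primary via its `/federate` endpoint at a coarser interval (hostnames and the `job:` naming convention are illustrative, not from this thread):

```yaml
# prometheus.yml on a hypothetical long-term server.
# It federates only recording-rule outputs from the primary server,
# effectively "downsampling" by scraping less frequently.
scrape_configs:
  - job_name: federate
    scrape_interval: 5m          # coarser resolution than the primary's interval
    honor_labels: true
    metrics_path: /federate
    params:
      match[]:
        - '{__name__=~"job:.*"}' # only pre-aggregated recording-rule series
    static_configs:
      - targets:
          - primary-prometheus:9090  # hypothetical hostname
```

This keeps low-resolution data on a separate server with its own retention, which is exactly the kind of multi-server plumbing the comment calls messy.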

Another useful feature would be different TTLs (retention periods) for different metrics.
For example, in our setup a lot of metrics are aggregated via recording rules, and after "recording" the raw series are never queried again.
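As an illustration of that pattern, a hypothetical recording rule that aggregates per-instance data into a job-level series (the rule name and metric are made up for this sketch):

```yaml
# rules.yml (hypothetical): aggregate request rate across all instances of a job.
# After this rule runs, the per-instance http_requests_total samples
# are the "never queried again" raw data the comment describes.
groups:
  - name: aggregation
    rules:
      - record: job:http_requests:rate5m
        expr: sum by (job) (rate(http_requests_total[5m]))
```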


fabxc commented Apr 24, 2017

The new storage certainly makes it easy to add such features to the compaction process, and the same goes for dynamic retention policies. Both have been considered and are semi-planned, either as core features or potentially in an external process.

One requirement for downsampling to work properly is staleness handling, which we luckily started working on just recently.


anarcat commented Jan 11, 2018

I heard there was progress on downsampling on the master branch. Can anyone clarify what the current state of affairs is here?

@krasi-georgiev

Closing in favour of the more recent discussion in #313.
