
[Feature Request] Allow retention config per scrape job #3200

Closed · d-shi opened this issue Sep 21, 2017 · 4 comments

d-shi commented Sep 21, 2017

I searched the existing issues and didn't see this anywhere. We would like to keep metrics scraped from a dev Kubernetes cluster for only a day or a few days, since pod churn there is huge (we create upwards of 50k new pods per day, each with hundreds of associated time series). However, we would like to keep metrics from production clusters for much longer. We want a single Prometheus instance to hold all the metrics, with different scrape jobs set up for the different clusters. It would be nice to be able to configure a retention policy per scrape job, similar to scrape_interval.
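
For illustration, something like the sketch below is what we have in mind. The per-job `retention` key is hypothetical (it is what this issue proposes, not an existing option); `scrape_interval` and `kubernetes_sd_configs` are existing settings, shown for context:

```yaml
# HYPOTHETICAL: the per-job `retention` key below does not exist in
# Prometheus; it is the feature being requested in this issue.
scrape_configs:
  - job_name: kubernetes-dev
    scrape_interval: 30s
    retention: 24h        # proposed: drop high-churn dev metrics after a day
    kubernetes_sd_configs:
      - role: pod
  - job_name: kubernetes-prod
    scrape_interval: 30s
    retention: 2160h      # proposed: keep production metrics ~90 days
    kubernetes_sd_configs:
      - role: pod
```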

brian-brazil (Member) commented Sep 21, 2017

In this case you want two separate Prometheus servers. In general, dev and prod should have distinct monitoring setups, and you'd usually also want one Prometheus per Kubernetes cluster.
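
A minimal sketch of that layout, assuming Prometheus 2.x, where retention is a server-wide setting controlled by the `--storage.tsdb.retention` flag (the service names, image tag, file names, and retention values here are illustrative, not prescriptive):

```yaml
# Two Prometheus servers, one per cluster, each with its own retention.
version: "3"
services:
  prometheus-dev:
    image: prom/prometheus:v2.0.0
    command:
      - "--config.file=/etc/prometheus/prometheus-dev.yml"
      - "--storage.tsdb.retention=24h"   # short retention for the high-churn dev cluster
    volumes:
      - ./prometheus-dev.yml:/etc/prometheus/prometheus-dev.yml
    ports:
      - "9090:9090"
  prometheus-prod:
    image: prom/prometheus:v2.0.0
    command:
      - "--config.file=/etc/prometheus/prometheus-prod.yml"
      - "--storage.tsdb.retention=720h"  # ~30 days for production metrics
    volumes:
      - ./prometheus-prod.yml:/etc/prometheus/prometheus-prod.yml
    ports:
      - "9091:9090"
```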

d-shi (Author) commented Sep 21, 2017

In our current state we would prefer not to run multiple Prometheus servers. We may eventually split into multiple servers, but for now we would like to have all our metrics in one place. There are likely many other use cases where per-job retention would be helpful. Can you comment on whether you would consider implementing this?

brian-brazil (Member) commented Sep 21, 2017

Duplicate of #1381; don't expect it anytime soon. The recommended architecture is as described above.

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
