
How to change default servicemonitors interval #2176

Closed
yasharne opened this issue Dec 4, 2023 · 5 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


yasharne commented Dec 4, 2023

Hi,
As the number of nodes in our OpenShift cluster grew, our Prometheus instances could no longer handle the load, so I wanted to scrape less often. I tried to increase the interval on the node-exporter ServiceMonitor, but because it is managed by the operator, the change gets reverted to the default value.
Is it possible to change the interval?
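For reference, the kind of edit described above can be expressed as a JSON patch against the ServiceMonitor (a sketch using a hypothetical `60s` value; as noted, cluster-monitoring-operator reverts direct edits to resources it manages):

```python
import json

# JSON patch that raises the scrape interval on the first endpoint of a
# ServiceMonitor. The "60s" value is illustrative; the operator will
# revert direct edits to the resources it owns.
patch = [{"op": "replace", "path": "/spec/endpoints/0/interval", "value": "60s"}]
print(json.dumps(patch))
```

It could be applied with something like `oc -n openshift-monitoring patch servicemonitor node-exporter --type=json -p "$PATCH"`, but the operator will reconcile it back.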

@a-thorat

@yasharne
I was able to change it on the node-exporter ServiceMonitor, and it did not get overridden:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/managed-by: cluster-monitoring-operator
    app.kubernetes.io/name: node-exporter
    app.kubernetes.io/part-of: openshift-monitoring
    app.kubernetes.io/version: 1.6.1
    monitoring.openshift.io/collection-profile: full
  name: node-exporter
  namespace: openshift-monitoring
spec:
  endpoints:
  - bearerTokenSecret:
      key: ''
    interval: 20s
```

Also note the ServiceMonitor endpoint field:

interval (string): Interval at which metrics should be scraped. If not specified, Prometheus' global scrape interval is used.

Have you tried changing it in the Prometheus CR prometheus-k8s in openshift-monitoring?

scrapeInterval (string): Interval between consecutive scrapes.
Default: "30s"
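A minimal sketch of what that would look like on the prometheus-k8s CR (illustrative only; cluster-monitoring-operator may revert direct edits to this resource as well, so this shows the field rather than a guaranteed-supported workflow):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: openshift-monitoring
spec:
  # Global scrape interval; a ServiceMonitor's own `interval`
  # still overrides this per endpoint.
  scrapeInterval: 60s
```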

Please let me know if this works for you.

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 14, 2024
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 13, 2024
@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci openshift-ci bot closed this as completed May 14, 2024
Contributor

openshift-ci bot commented May 14, 2024

@openshift-bot: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
