[chart] Don't set prometheus.io pod annotations when ServiceMonitor is enabled #2601
Comments
Hi @z0rc, I would like to work on this. Just to confirm: I should check from the values whether serviceMonitor is enabled, and if it is, omit the prometheus.io annotations, right?
@tamboliasir1 you are correct: wrap the prometheus.io annotations in an if block so they are only set when the ServiceMonitor is disabled.
Got it, thank you @z0rc.
/assign
Hi @z0rc, just wanted to confirm: the condition will look something like this, right?
@tamboliasir1 aside from the GitHub formatting, this looks okay. But it's better to create a PR and do a code review there.
Hi @z0rc, I have created a PR, please review. Thanks.
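For context, a minimal sketch of what such a condition could look like in the chart's `templates/deployment.yaml` (the annotation values and the `serviceMonitor.enabled` key are illustrative assumptions based on this thread, not copied from the chart):

```yaml
# Sketch: only emit the prometheus.io pod annotations when the
# ServiceMonitor is NOT enabled, to avoid double scraping.
{{- if not .Values.serviceMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "8080"  # illustrative port, not taken from the chart
{{- end }}
```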
Describe the bug
https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/v2.4.1/helm/aws-load-balancer-controller/templates/deployment.yaml#L25-L26 are always set. Even with the ServiceMonitor enabled, these annotations are still present. This results in double scraping, as Prometheus discovers both the pod annotations and the ServiceMonitor as independent scraping targets.
Steps to reproduce
Enable the ServiceMonitor via Helm values and observe that the prometheus.io annotations are still set on the pods.
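For example, enabling it via a values file might look like this (the `serviceMonitor.enabled` key name is assumed from the discussion above):

```yaml
# Illustrative Helm values enabling the ServiceMonitor
serviceMonitor:
  enabled: true
```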
Expected outcome
When ServiceMonitor is enabled, prometheus.io pod annotations should be omitted.
Environment
Additional Context: None