
Missing speaker monitor service label for proper metric labeling #2389

Open
huseyinbabal opened this issue May 2, 2024 · 0 comments · May be fixed by #2390

MetalLB Version

v0.14.5

Deployment method

Charts

Main CNI

cilium

Kubernetes Version

v1.25.14

Cluster Distribution

rancher

Describe the bug

Currently metallb.name (e.g. metallb) is used as the metrics job label in the Prometheus rules, but the job label on the MetalLB metrics themselves is rendered as metallb-speaker-monitor-service, so the rules match zero series when they are evaluated. Having metallb-speaker-monitor-service in the job label is expected: the ServiceMonitor's jobLabel is set to app.kubernetes.io/name by default, there is no such label on the metallb-speaker-monitor-service Service, and the job label therefore falls back to the actual Service name, which is metallb-speaker-monitor-service.

Please see the documented jobLabel behavior, quoted below:

jobLabel selects the label from the associated Kubernetes Service object which will be used as the job label for all metrics.

For example if jobLabel is set to foo and the Kubernetes Service object is labeled with foo: bar, then Prometheus adds the job="bar" label to all ingested metrics.

If the value of this field is empty or if the label doesn’t exist for the given Service, the job label of the metrics defaults to the name of the associated Kubernetes Service.
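
To make the fix concrete, here is a minimal sketch of the speaker monitor Service with the missing label added. The spec fields (selector, port) are assumptions for illustration, not taken from the chart's actual templates; only the Service name and the app.kubernetes.io/name label come from this report.

```yaml
# Hypothetical sketch: with this label present, the ServiceMonitor's
# default jobLabel (app.kubernetes.io/name) resolves to "metallb",
# so metrics are ingested with job="metallb" instead of falling
# back to the Service name.
apiVersion: v1
kind: Service
metadata:
  name: metallb-speaker-monitor-service
  labels:
    app.kubernetes.io/name: metallb
spec:
  selector:
    app.kubernetes.io/component: speaker  # assumed selector, illustrative only
  ports:
    - name: metrics
      port: 7472        # MetalLB's default metrics port
      targetPort: 7472
```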

To Reproduce

  1. Deploy kube-prometheus-stack
  2. Deploy the metallb chart with ... --set prometheus.serviceMonitor.enabled=true --set prometheus.serviceAccount=prometheus-monitoring-kube-prometheus --set prometheus.namespace=monitoring (full commands sketched below)
  3. Wait for a while, then run the query metallb_bgp_session_up{job="metallb"} in Prometheus
  4. Verify there is no result, since all the metrics carry {job="metallb-speaker-monitor-service"}
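
A minimal sketch of the reproduction commands. The release names and namespaces are assumptions, and the prometheus-community and metallb Helm repositories are assumed to be added already; the --set flags come from step 2 above.

```sh
# Assumes: helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
#          helm repo add metallb https://metallb.github.io/metallb
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

helm install metallb metallb/metallb \
  --namespace metallb-system --create-namespace \
  --set prometheus.serviceMonitor.enabled=true \
  --set prometheus.serviceAccount=prometheus-monitoring-kube-prometheus \
  --set prometheus.namespace=monitoring

# Then query Prometheus and observe zero results:
#   metallb_bgp_session_up{job="metallb"}
```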

Expected Behavior

The metrics in Prometheus should look like

metallb_bgp_session_up{container="metrics", endpoint="metrics", job="metallb"}

not

metallb_bgp_session_up{container="metrics", endpoint="metrics", job="metallb-speaker-monitor-service"}


Additional Context

Since the Helm chart does not produce matching job labels with its default settings, the critical alert MetalLBBGPSessionDown can never fire, and BGP session failures may go unnoticed.
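
To illustrate the failure mode, here is a hedged sketch of the kind of alerting rule affected; the expression and thresholds are illustrative, and the chart's actual MetalLBBGPSessionDown rule may differ in its details.

```yaml
groups:
  - name: metallb
    rules:
      - alert: MetalLBBGPSessionDown
        # The job="metallb" matcher is rendered from metallb.name, but the
        # ingested series carry job="metallb-speaker-monitor-service", so
        # this expression returns no series and the alert never fires.
        expr: metallb_bgp_session_up{job="metallb"} == 0
        for: 5m
        labels:
          severity: critical
```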

I've read and agree with the following

  • I've checked all open and closed issues and my request is not there.
  • I've checked all open and closed pull requests and my request is not there.

I've read and agree with the following

  • I've checked all open and closed issues and my issue is not there.
  • This bug is reproducible when deploying MetalLB from the main branch
  • I have read the troubleshooting guide and I am still not able to make it work
  • I checked the logs and MetalLB is not discarding the configuration as not valid
  • I enabled the debug logs, collected the information required from the cluster using the collect script and will attach them to the issue
  • I will provide the definition of my service and the related endpoint slices and attach them to this issue