
Monitoring labels aren't applied to rook-ceph-exporter #13774

Closed
abh opened this issue Feb 15, 2024 · 5 comments · Fixed by #13902
Assignees: rkachach
Labels: bug, good-first-issue (Simple issues that are good for getting started with Rook.)
Projects: v1.13 (Done)

Comments

abh commented Feb 15, 2024

Is this a bug report or feature request?

  • Bug Report

Deviation from expected behavior:

The monitoring labels aren't applied to the rook-ceph-exporter service monitor.

Expected behavior:

The rook-ceph-exporter service monitor would have the same labels applied as the rook-ceph-mgr service monitor.

How to reproduce it (minimal and precise):

Add this to the cluster spec:

  labels:
    monitoring:
      prometheus: abc

and notice that the label is only applied to two of the three service monitors.
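
For context, here is a minimal sketch of where that block sits in a full CephCluster manifest; the cluster name, namespace, and the monitoring.enabled setting are illustrative assumptions rather than part of this report:

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph        # assumed cluster name
  namespace: rook-ceph   # assumed namespace
spec:
  monitoring:
    enabled: true        # assumed: the ServiceMonitors are only created when monitoring is enabled
  labels:
    monitoring:
      prometheus: abc    # expected to end up on every ServiceMonitor Rook creates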

abh added the bug label Feb 15, 2024
travisn added the good-first-issue label (Simple issues that are good for getting started with Rook.) Feb 15, 2024
travisn added this to To do in v1.13 via automation Feb 15, 2024

rspier commented Feb 17, 2024

I poked at the code today to see if I could fix this; it looks like it's already possible.

spec:
  ... 
  labels:
    exporter:
      prometheus: perl

These labels are applied to both the rook-ceph-exporter Deployments and the ServiceMonitor.

if cephExporterLabels, ok := cephCluster.Spec.Labels["exporter"]; ok {
    if managedBy, ok := cephExporterLabels["rook.io/managedBy"]; ok {

We could also apply the monitoring labels, but I'm not sure it's worth it.
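
To illustrate what "applied to the ServiceMonitor" means in practice, the resulting rook-ceph-exporter ServiceMonitor metadata should look roughly like this sketch (the namespace and the absence of other default labels are assumptions):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rook-ceph-exporter
  namespace: rook-ceph      # assumed namespace
  labels:
    prometheus: perl        # propagated from spec.labels.exporter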

rkachach (Contributor) commented

@abh @rspier is there anything else we can do here? As @rspier commented, labels can already be specified in the spec file.

rkachach self-assigned this Feb 28, 2024

rspier commented Mar 2, 2024

At a minimum, maybe an exporter: example could be added to the docs? (I can send a PR if you agree.)

https://rook.io/docs/rook/latest-release/Storage-Configuration/Monitoring/ceph-monitoring/?h=monitoring#using-custom-label-selectors-in-prometheus

I think our expectation was that the monitoring: stanza would also apply to the exporter.

(If there's ever a non-backwards-compatible change to the CephCluster object, it might be interesting to move the monitoring labels into the spec.monitoring stanza, or even embed a ServiceMonitorSpec there for override purposes. But the existing system works.)
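
Something along these lines could serve as that docs example; it is only a sketch that combines the two label keys discussed in this issue, and the label values are placeholders:

spec:
  labels:
    monitoring:
      prometheus: rook-prometheus   # applied to the rook-ceph-mgr ServiceMonitor
    exporter:
      prometheus: rook-prometheus   # applied to the rook-ceph-exporter Deployment and ServiceMonitor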

rkachach added a commit to rkachach/rook that referenced this issue Mar 8, 2024
the labels listed under the 'monitoring' section are currently only
being applied to the rook-ceph-mgr ServiceMonitor. This change extends
those labels to also include the rook-ceph-exporter ServiceMonitor.

Fixes: rook#13774

Signed-off-by: Redouane Kachach <rkachach@redhat.com>
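
Put differently, once this change lands, the spec from the original report should be enough on its own. A sketch of the intended result:

spec:
  labels:
    monitoring:
      prometheus: abc   # now applied to both the rook-ceph-mgr and rook-ceph-exporter ServiceMonitors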
v1.13 automation moved this from To do to Done Mar 13, 2024
mergify bot pushed a commit that referenced this issue Mar 13, 2024
the labels listed under the 'monitoring' section are currently only
being applied to the rook-ceph-mgr ServiceMonitor. This change extends
those labels to also include the rook-ceph-exporter ServiceMonitor.

Fixes: #13774

Signed-off-by: Redouane Kachach <rkachach@redhat.com>
(cherry picked from commit 10e2670)

abh commented Mar 14, 2024

nice, thanks @rkachach & @travisn!


rspier commented Mar 23, 2024

+1!
