
[kube-prometheus-stack] ServiceMonitor of bitnami kafka not added to scrape configs #3487

Closed
sebastianlutter opened this issue Jun 12, 2023 · 4 comments
Labels
bug Something isn't working

Comments

@sebastianlutter

sebastianlutter commented Jun 12, 2023

Describe the bug

The ServiceMonitor created by the bitnami kafka chart is not recognised by the prometheus-operator, and the kafka metrics endpoint is not added to the list of scrape targets in Prometheus.

What's your helm version?

version.BuildInfo{Version:"v3.12.0", GitCommit:"c9f554d75773799f72ceef38c51210f1842a1dea", GitTreeState:"clean", GoVersion:"go1.20.3"}

What's your kubectl version?

Client Version: v1.27.2 Kustomize Version: v5.0.1 Server Version: v1.26.3

Which chart?

https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack

What's the chart version?

46.8.0

What happened?

  • Started kind cluster
  • Use namespace monitoring for both helm charts
  • Deployed kube-prometheus-stack with helm using this values.yaml
prometheusOperator:
  namespaces:
    releaseNamespace: true
    additional:
      - kube-system
      - monitoring

prometheus:
  enabled: true
  serviceMonitorSelector:
    matchLabels:
      prometheus: true
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelectorNilUsesHelmValues: true
  • Deployed the bitnami kafka helm chart using this values.yaml
replicaCount: 3
nodeSelector:
  node-type: worker
zookeeper:
  enabled: false
kraft:
  enabled: true
  processRoles: broker,controller
  controllerListenerNames: CONTROLLER
  clusterId: MjM4YTEyMzRmZjFkMTFlZG
auth:
  clientProtocol: plaintext
  externalClientProtocol: plaintext
allowPlaintextListener: true
serviceAccount:
  create: true
rbac:
  create: false
metrics:
  kafka:
    enabled: true
  jmx:
    enabled: false
  serviceMonitor:
    enabled: true
    labels:
      prometheus: "true"
  • ServiceMonitor instances created by the kube-prometheus-stack are found and added to scrape targets
  • ServiceMonitor instances created by the kafka chart are not found and not added to scrape targets
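The discovery step that fails here boils down to Kubernetes label selection: the prometheus-operator only picks up ServiceMonitor objects whose labels satisfy the Prometheus CR's serviceMonitorSelector. A minimal sketch of matchLabels semantics (exact key/value equality on every listed key; the function name and example labels are illustrative, not part of the operator's code):

```python
def match_labels(resource_labels: dict, selector: dict) -> bool:
    """Return True if every key/value pair in the selector's matchLabels
    is present verbatim in the resource's labels (Kubernetes semantics)."""
    return all(resource_labels.get(k) == v for k, v in selector.items())

# Labels on the kafka chart's ServiceMonitor, per the values above.
kafka_sm_labels = {"app": "kafka", "prometheus": "true"}

# A selector on "prometheus: true" (as a string) would match...
print(match_labels(kafka_sm_labels, {"prometheus": "true"}))
# ...but the chart's default release-label selector would not.
print(match_labels(kafka_sm_labels, {"release": "kube-prometheus-stack"}))
```

Note that Kubernetes label values are always strings, so an unquoted `prometheus: true` in a values.yaml can render differently from `prometheus: "true"` and then fail this exact-equality check.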

What you expected to happen?

The ServiceMonitor created by the kafka chart should be discovered by the prometheus-operator, and the kafka-bitnami-metrics service should be added as a scrape target in the 'prometheus.yaml'.

How to reproduce it?

Find a minimal example with all details that reproduces the issue here:

Enter the changed values of values.yaml?

prometheusOperator:
  namespaces:
    releaseNamespace: true
    additional:
      - kube-system
      - monitoring

prometheus:
  enabled: true
  serviceMonitorSelector:
    matchLabels:
      prometheus: true
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelectorNilUsesHelmValues: true

Enter the command that you execute and failing/misfunctioning.

It is a runtime/configuration problem; there is no single failing command.

Anything else we need to know?

I tried to find out why the ServiceMonitor created by kafka helm chart is not found using the following resources:

Everything looks fine; I cannot find a reason why it is not picked up, nor any logs or events explaining why the prometheus-operator ignores the ServiceMonitor created by kafka. Please help.

@sebastianlutter sebastianlutter added the bug Something isn't working label Jun 12, 2023
@sebastianlutter
Author

A solution is to explicitly set a release: kube-prometheus-stack label on the kafka chart's ServiceMonitor instances. I guess this is the selector the prometheus-operator uses to discover the other ServiceMonitors from the kube-prometheus-stack chart.

But in my understanding, when I set prometheus.serviceMonitorSelectorNilUsesHelmValues to false and set prometheus.serviceMonitorSelector.matchLabels to prometheus: true in kube-prometheus-stack, it should find the ServiceMonitor instances of the kafka chart (whose metrics.serviceMonitor.labels is set to prometheus: true as well). But it does not.

Do I misunderstand something?
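The workaround described above can be expressed directly in the kafka chart's values. A sketch, assuming the kube-prometheus-stack release is literally named kube-prometheus-stack (adjust the label value to your actual release name):

```yaml
metrics:
  serviceMonitor:
    enabled: true
    labels:
      # Must match the Prometheus CR's default serviceMonitorSelector,
      # which selects on the Helm release label of kube-prometheus-stack.
      release: kube-prometheus-stack
```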

@zeritti
Contributor

zeritti commented Jun 14, 2023

Is it possible that you specified prometheus.serviceMonitorSelector instead of the needed prometheus.prometheusSpec.serviceMonitorSelector? The selector eventually needs to end up in the Prometheus CR's spec.
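In other words, the selector keys belong under prometheus.prometheusSpec, which the chart copies into the Prometheus CR's spec; a top-level prometheus.serviceMonitorSelector is silently ignored. A sketch of the corrected placement:

```yaml
prometheus:
  prometheusSpec:   # fields here are templated into the Prometheus CR's spec
    serviceMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelector:
      matchLabels:
        prometheus: "true"   # quote the value: label values are strings
```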

@sebastianlutter
Author

You are right, my spec is wrong. With this values.yaml for kube-prometheus-stack

prometheusOperator:
  namespaces:
    releaseNamespace: true
    additional:
      - kube-system
      - monitoring

prometheus:
  enabled: true
  prometheusSpec:
    serviceMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelector:
      matchLabels:
        release: my-own-release
    serviceMonitorNamespaceSelector:
      matchExpressions:
        - key: name
          operator: In
          values:
            - monitoring
            - kube-system

And this values.yaml for the bitnami kafka chart:

metrics:
  kafka:
    enabled: true
  jmx:
    enabled: true
    labels:
      app: kafka-jmx
      release: kube-prometheus-stack
  serviceMonitor:
    enabled: true
    labels:
      app: kafka
      release: my-own-release

Then I only have the ServiceMonitor from kafka in the scrape targets of Prometheus. So the options are working now, as long as kafka and kube-prometheus-stack share the same namespace.

But I struggle to get it running in two different namespaces; that is an issue related to creating the right ClusterRole and ClusterRoleBinding to allow Prometheus to discover ServiceMonitors from different namespaces.
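For the cross-namespace case, two things have to line up: the operator needs RBAC in the target namespaces (the prometheusOperator.namespaces settings shown earlier control which namespaces the chart creates roles for), and the Prometheus CR needs a serviceMonitorNamespaceSelector that actually matches. One likely pitfall in the values above: the matchExpressions select on a name label, which namespaces do not carry by default; the built-in label is kubernetes.io/metadata.name. A hedged sketch (namespace names are illustrative):

```yaml
prometheus:
  prometheusSpec:
    serviceMonitorNamespaceSelector:
      matchExpressions:
        - key: kubernetes.io/metadata.name   # built-in namespace label
          operator: In
          values:
            - monitoring
            - kafka   # illustrative namespace for the kafka release
```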

Closing issue

@johnswarbrick-napier
Contributor

Hi @sebastianlutter,

But I struggle to get it running in two different namespaces; that is an issue related to creating the right ClusterRole and ClusterRoleBinding to allow Prometheus to discover ServiceMonitors from different namespaces.

I have the same problem where Prometheus cannot discover podMonitor or serviceMonitor from other namespaces. Are you on EKS too?

How did you fix it?
