
Kubernetes SD scrapes all service ports, not just the one annotated #2507

Closed
drewhemm opened this Issue Mar 17, 2017 · 9 comments

@drewhemm (Contributor) commented Mar 17, 2017

I created an OpenShift Kubernetes service and annotated it as follows:

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9101"
    prometheus.io/scrape: "true"
  creationTimestamp: null
  labels:
    router: router
  name: router
spec:
  ports:
  - name: 80-tcp
    port: 80
    protocol: TCP
    targetPort: 80
  - name: 443-tcp
    port: 443
    protocol: TCP
    targetPort: 443
  - name: 1936-tcp
    port: 1936
    protocol: TCP
    targetPort: 1936
  selector:
    router: router
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

I expected to see a single target per annotated scrape port per pod/container. Instead I see one target for every pod/container port, even though only 9101 is annotated. The same happens even when I create a separate service that lists only the scrape port:

apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9101"
    prometheus.io/scrape: "true"
  creationTimestamp: null
  labels:
    router: router
  name: router-exporter
spec:
  ports:
  - name: exporter
    port: 9101
    protocol: TCP
    targetPort: 9101
  selector:
    router: router
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Prometheus still tries to scrape from every port open to the pod/container.

I'm running OpenShift Enterprise 3.4 with Kubernetes version 1.4.0 and Prometheus version 1.5.2.

  • System information:
    Linux 3.10.0-514.6.1.el7.x86_64 x86_64

  • Prometheus configuration file:

- job_name: 'openshift-service-endpoints'

  scheme: http

  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

  kubernetes_sd_configs:
  - role: endpoints
    api_server: https://kubernetes.default.svc
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
    regex: (https?)
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: (.+)(?::\d+)?:(\d+)
    replacement: $1:$2
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name
@drewhemm (Contributor, Author) commented Mar 17, 2017

Temporary workaround: set some other identifier on the exporter container, or on the container/service port, and then use an additional relabel config to filter out unwanted targets:

- source_labels: [__meta_kubernetes_pod_container_name]
  action: keep
  regex: prometheus.*

or...

- source_labels: [__meta_kubernetes_pod_container_port_name]
  action: keep
  regex: prometheus.*

or...

- source_labels: [__meta_kubernetes_pod_container_port_number]
  action: keep
  regex: 9\d{3}

I don't really like having to do this though, since it makes the service discovery less dynamic and requires additional contrived config on the Kubernetes objects involved...

The last example only works if you don't have any other non-metrics ports listening in the 9000-9999 range.
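
For illustration, such a keep rule simply slots into the job's relabel_configs alongside the existing rules; a minimal sketch, assuming the exporter's container port was deliberately given a name starting with "prometheus" (a made-up naming convention, not something from the thread):

relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
  action: keep
  regex: true
# Workaround: drop any target whose container port name does not follow the
# contrived "prometheus*" naming convention.
- source_labels: [__meta_kubernetes_pod_container_port_name]
  action: keep
  regex: prometheus.*
# ...remaining relabel rules from the job above, unchanged...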

@kfox1111 commented Apr 25, 2017

I've hit a similar issue with sidecars.
Say I have a container that is not Prometheus-aware, and a sidecar that adds a Prometheus exporter.
If the first container doesn't even have a TCP port, Prometheus still tries to scrape it as port 80 and always fails.

Could a prometheus.io/ports annotation feature be added, whose value is a string-encoded list such as "[9100,9101]", so that when it is specified all other ports are ignored by the default scraper? This would still allow the previous behavior to work while eliminating the failures.
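
No such annotation exists in the stock example configs today, but the effect can be approximated by hard-coding an allow-list of exporter ports in the scrape config; a minimal sketch, assuming the exporters listen on 9100 and 9101:

# Approximation of a prometheus.io/ports allow-list: keep only targets whose
# discovered container port is one of the explicitly listed exporter ports.
- source_labels: [__meta_kubernetes_pod_container_port_number]
  action: keep
  regex: 9100|9101

New exporter ports still have to be added to this regex by hand, which is the same loss of dynamism the earlier workaround comment complains about.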

@brian-brazil (Member) commented Jul 14, 2017

@dimpavloff commented Aug 3, 2017

@drewhemm we had a similar issue, but for pods. Changing the regex to ([^:]+)(?::\d+)?;(\d+) worked for us.

@kfox1111 commented Aug 3, 2017

Which regex? It would be nice if we could get it into github.com/kubernetes/charts/stable/prometheus so it works right out of the box.

@dimpavloff commented Aug 4, 2017

@kfox1111 the regex is in the section rewriting the __address__ label, part of the kubernetes-pods scrape job:

          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: ${1}:${2}
            target_label: __address__

I haven't tested this with __meta_kubernetes_service_annotation_prometheus_io_port for the kubernetes-endpoints job (because we haven't had an issue there), but I don't see a reason why it wouldn't work there either.
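
For reference, the untested equivalent for the endpoints job would simply swap in the service annotation; a sketch only:

          # Untested sketch: the same address rewrite for the kubernetes-endpoints job,
          # keyed on the service-level port annotation instead of the pod-level one.
          - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: ${1}:${2}
            target_label: __address__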

@lainekendall commented Aug 17, 2018

I ran into this problem too. However, after reading the Prometheus configuration docs, it seems this is the intended behavior:

"The endpoints role discovers targets from listed endpoints of a service. For each endpoint address one target is discovered per port. If the endpoint is backed by a pod, all additional container ports of the pod, not bound to an endpoint port, are discovered as targets as well."

From https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Ckubernetes_sd_config%3E
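
To make the quoted behavior concrete for the router service above (a reading of the docs, not output captured in this thread), the endpoints role would discover, for each backing pod, one target per endpoint port plus one per unbound container port:

# Illustrative targets for one pod behind the "router" service, assuming the pod
# also declares 9101 as a containerPort (10.1.2.3 is a made-up pod IP):
#   10.1.2.3:80     <- endpoint port 80-tcp
#   10.1.2.3:443    <- endpoint port 443-tcp
#   10.1.2.3:1936   <- endpoint port 1936-tcp
#   10.1.2.3:9101   <- additional container port, not bound to an endpoint port

Unless a later relabel rule rewrites or drops them, all four survive as separate targets, which matches the "one target for each pod/container port" seen in the original report.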

@navi86 commented Sep 19, 2018

Is it possible to automatically get all the nodePort values from a Kubernetes service for scraping?
I can only get the ports of the pods :(

atomy pushed a commit to mlamm/amadeus-ws-client that referenced this issue on Nov 26, 2018:

VAN-866 | (fix) serve prometheus metrics via port 80 as the prometheus exporter uses all open ports instead of the configured one (prometheus/prometheus#2507)
@lock (bot) commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators on Mar 22, 2019
