
kubernetes_sd_config pod: additional instance for port 80 created #2208

Closed
baracoder opened this Issue Nov 18, 2016 · 7 comments


baracoder commented Nov 18, 2016

What did you do?

Create a pod with these annotations:

apiVersion: v1
kind: ReplicationController
metadata:
  name: es
  namespace: es2
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9108'
    spec:
      serviceAccount: elasticsearch2
      containers:
      - name: es-client-2
        image: quay.io/pires/docker-elasticsearch-kubernetes:2.3.4
        #...
        ports:
        - containerPort: 9200
          name: http
          protocol: TCP
      - name: prometheus-es-exporter
        image: crobox/elasticsearch-exporter
        args: [ elasticsearch_exporter ]

What did you expect to see?

One instance for the kubernetes-pods target, with a URL like http://10.2.17.4:9108/metrics

What did you see instead? Under which circumstances?

Two instances, one of them on port 80, which I never defined:
http://10.2.17.4:80/metrics
http://10.2.17.4:9108/metrics

Environment

  • Kubernetes version: 1.4.6
  • Prometheus version:
    Version:   1.3.1
    Revision:  be476954e80349cb7ec3ba6a3247cd712189dfcb
    Branch:    master
    BuildUser: root@37f0aa346b26
    BuildDate: 20161104-20:24:03
    GoVersion: go1.7.3
  • Prometheus configuration file:

Example config from the 1.3.1 tag https://github.com/prometheus/prometheus/blob/v1.3.1/documentation/examples/prometheus-kubernetes.yml, but with api_servers: .. removed:

  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
    - role: pod
    relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: (.+):(?:\d+);(\d+)
      replacement: ${1}:${2}
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_pod_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name
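
For illustration, assuming the pod role emits one target per declared container port (and a portless target for a container that declares none), the address-rewrite rule above plays out like this on the pod from the report:

    # es-client-2 declares containerPort 9200:
    #   "10.2.17.4:9200;9108"  ->  regex matches     ->  __address__ = 10.2.17.4:9108
    # prometheus-es-exporter declares no ports:
    #   "10.2.17.4;9108"       ->  no ":<port>" in the address, so the regex does not match
    #   __address__ stays "10.2.17.4" and Prometheus falls back to the default port 80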
brancz (Member) commented Nov 21, 2016

Version 1.3.0 of Prometheus had a breaking change in the Kubernetes configuration, and there is no general consensus on a default configuration (there may not be such a thing). The problem with the previous approach was that only one target was generated per pod, which made things difficult when you actually have two metrics endpoints per pod (this happens often; even several Kubernetes components require it).

In your case you probably want to use relabelling to drop or keep one or the other. What I have done, and seen other people do, is introduce a practice in your environment where you keep only those targets of a pod that have a specific port name. To make a specific suggestion regarding your ReplicationController: you would have to add a port to the ports list and give it the name "metrics", for example. Then in your relabel_configs use:

- source_labels: [__meta_kubernetes_pod_container_port_name]
  action: keep
  regex: metrics
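
For example, a minimal sketch of that change against the ReplicationController above (assuming the exporter actually listens on the 9108 port named in the prometheus.io/port annotation):

      - name: prometheus-es-exporter
        image: crobox/elasticsearch-exporter
        args: [ elasticsearch_exporter ]
        ports:
        - containerPort: 9108   # assumed: the port behind the prometheus.io/port annotation
          name: metrics
          protocol: TCP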
baracoder (Author) commented Nov 22, 2016

Ah, thank you.

So if I match for ^metrics.*, multiple targets per pod would work, at least for exporters and applications that expose metrics on a separate port.

It would be nice if Kubernetes allowed the same port to carry different names per container; that would also work for applications which expose metrics on the same port as the service:

        ports:
        - containerPort: 80
          name: http
        - containerPort: 80
          name: metrics

but Kubernetes only keeps one name.

baracoder closed this Nov 22, 2016

brancz (Member) commented Nov 22, 2016

@baracoder just out of curiosity, what is your use case for using the pod discovery? I have the feeling that 98% of the cases can be achieved more cleanly with the Endpoints discovery, but I'm very curious to hear otherwise and be convinced. With Endpoints discovery you are in a position to create a second Endpoints object with the same port definition but a different name.
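
A sketch of what that could look like (the Service name and the app label here are illustrative, not from this thread): a Service selecting the pods and exposing a dedicated, named metrics port,

apiVersion: v1
kind: Service
metadata:
  name: es-metrics            # illustrative name
  namespace: es2
spec:
  selector:
    app: es                   # assumed pod label
  ports:
  - name: metrics
    port: 9108
    targetPort: 9108

scraped through the endpoints role, keeping only the named port:

- job_name: 'kubernetes-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    action: keep
    regex: metrics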

baracoder (Author) commented Nov 22, 2016

@brancz my plan was to define the targets in the Deployment resource, since the configuration for the target lives there anyway, and I have some pods doing background work which are not exposed as services.
This way I would not need to touch the services or create services "just for metrics".

brancz (Member) commented Nov 22, 2016

Nice. Thanks for the insight. Do you do any relabelling to get the job label to be more precise than "kubernetes-pods"?

baracoder (Author) commented Nov 22, 2016

Since the pods have labels anyway, the labelmap seems to be enough:

    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)

It seems to be enough for now, but I don't know whether it will hold up in the future.
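
For illustration, assuming the pods carry a label such as app: es (not shown in this thread), the labelmap rule copies each pod label onto the target:

    # Kubernetes pod label:  app=es                                 (assumed example)
    # SD meta label:         __meta_kubernetes_pod_label_app="es"
    # after labelmap:        app="es" attached to every scraped series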

lock bot commented Mar 24, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited the conversation to collaborators Mar 24, 2019
