
Add service annotations support (Enhancement proposal) #1547

Closed
jalberto opened this issue Jun 28, 2018 · 18 comments · Fixed by helm/charts#14543

@jalberto

In the non-operator Prometheus Helm chart there is an interesting default configuration that allows scraping any service or endpoint that has specific annotations:

annotations:
  prometheus.io/port: "80"
  prometheus.io/scrape: "true"

I think a default ServiceMonitor with a similar configuration could be very useful (in particular for migrations).
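
For context, a Service annotated this way might look like the following sketch (the name, selector, and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-app                     # illustrative name
  annotations:
    prometheus.io/scrape: "true"   # opt the Service into scraping
    prometheus.io/port: "80"       # port the metrics endpoint listens on
spec:
  selector:
    app: my-app
  ports:
    - name: http
      port: 80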

@brancz
Contributor

brancz commented Jun 28, 2018

The reason this is not used here is that the Prometheus upstream team (which we are all part of) advises against it, as it is very limited in what it can do. In fact, it's the very reason the ServiceMonitor object exists.

A few examples of things that are impossible with this approach:

  • any targets with multiple ports
  • any targets that don't all share the exact same authentication, HTTP, and TLS configuration
  • selection is all-or-nothing, rather than based on the well-known label-selection paradigm in Kubernetes

If you want to mimic a behavior close to this (although with labels rather than annotations; labels are better because they are actually indexed in Kubernetes), you can create a ServiceMonitor that selects all Service objects carrying the label prometheus.io/scrape: "true". This will scrape either all ports or only the port defined on the ServiceMonitor.
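
A minimal sketch of such a ServiceMonitor (the name, the release label, and the port name are assumptions; adjust them to whatever your Prometheus serviceMonitorSelector and your Services actually use):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: label-selected-services      # hypothetical name
  labels:
    release: prometheus-operator     # assumption: must match the Prometheus serviceMonitorSelector
spec:
  namespaceSelector:
    any: true                        # consider Services in all namespaces
  selector:
    matchLabels:
      prometheus.io/scrape: "true"   # a label on the Service, not an annotation
  endpoints:
    - port: metrics                  # assumption: the metrics port on the Service is named "metrics"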

Alternatively, you can use additionalScrapeConfigs and write your own relabeling rules to select Services that way.

@jalberto
Author

jalberto commented Jul 2, 2018

@brancz thanks for the reasoning, it makes sense and is a fair point. That said, I still think a basic predefined ServiceMonitor for this task could be useful for the simplest metrics (the ones that don't require multiple ports or any other customization).

@brancz
Contributor

brancz commented Jul 2, 2018

Yeah I think it's reasonable to document a ServiceMonitor that works for a broad common case.

@HaveFun83
Contributor

It would be great if someone could document an example ServiceMonitor for this use case.

@sstarcher

Certainly, going forward ServiceMonitor is the preferred approach, but if I install a new service, say from a Helm chart that uses the old prometheus.io/scrape: "true" annotation, it would be great if it just worked instead of requiring extra work on the user's end. I was hoping that moving from the Prometheus Helm chart to the Prometheus Operator Helm chart would require less work.

@sstarcher

sstarcher commented Dec 12, 2018

I tossed this into the Helm values for the prometheus-operator chart. Note the node-exporter drop rule at the bottom: if you don't drop it, node-exporter will be scraped twice, since it still carries the old annotation as well. I have overridden the name, so yours will be different.

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      # Scrape config for service endpoints.
      #
      # The relabeling allows the actual service scrape endpoint to be configured
      # via the following annotations:
      #
      # * `prometheus.io/scrape`: Only scrape services that have a value of `true`
      # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
      # to set this to `https` & most likely set the `tls_config` of the scrape config.
      # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
      # * `prometheus.io/port`: If the metrics are exposed on a different port to the
      # service then set this appropriately.
      - job_name: 'kubernetes-service-endpoints'

        kubernetes_sd_configs:
          - role: endpoints

        relabel_configs:
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
            action: replace
            target_label: __scheme__
            regex: (https?)
          - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
            action: replace
            target_label: __address__
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
          - action: labelmap
            regex: __meta_kubernetes_service_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_service_name]
            action: replace
            target_label: kubernetes_name
          - source_labels: [__meta_kubernetes_service_name]
            action: drop
            regex: 'node-exporter'

      # Example scrape config for pods
      #
      # The relabeling allows the actual pod scrape endpoint to be configured via the
      # following annotations:
      #
      # * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
      # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
      # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the default of `9102`.
      - job_name: 'kubernetes-pods'

        kubernetes_sd_configs:
          - role: pod

        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: kubernetes_pod_name

@tombh

tombh commented Jun 17, 2019

The main point for me is that the advice to add prometheus.io/scrape: "true" to your annotations is all over the web, so it was pretty confusing trying to figure out why this chart wasn't picking up the extra metrics I was expecting. So my suggestion would be simply some documentation that acknowledges the prevailing advice elsewhere on the web and points users in the right direction.

@brancz
Contributor

brancz commented Jun 20, 2019

So my advice would be merely some documentation that acknowledged the prevailing advice elsewhere on the web and pointed users in the right direction.

100% agreed. From the perspective of someone who recently ran into this, where do you think we should put this?

@tombh

tombh commented Jun 20, 2019

The README here points to the Helm chart README, so maybe there? https://github.com/helm/charts/tree/master/stable/prometheus-operator#prometheus-operator

@brancz
Contributor

brancz commented Jun 20, 2019

@vsliouniaev do you think you could add a note there? :)

@vsliouniaev
Contributor

Sure thing, will do that.

@stale

stale bot commented Sep 2, 2019

This issue has been automatically marked as stale because it has not had any activity in the last 60 days. Thank you for your contributions.

@mtiller

mtiller commented Jan 29, 2021

Just a comment from somebody who stumbled upon this. As I understand it, the developers removed the discovery-via-annotations approach and replaced it with ServiceMonitor and PodMonitor because annotation-based discovery had limitations and the *Monitor resources represent a more powerful approach.

Now, I'm new to Prometheus and (so far) have not figured out exactly how to use ServiceMonitor and PodMonitor, so I cannot dispute their utility or power. However, I would like to point out the old maxim that simple things should be simple and complex things should be possible. I feel perhaps something has been missed here, because it seems to me that having only the *Monitor CRDs makes everything powerful but complex. Supporting both annotation-based discovery and *Monitor resources seems to me (admittedly, a novice) to actually satisfy that maxim.

@simonpasquier
Contributor

@mtiller Frederic already explained the reasoning at #1547 (comment)

Here are a few Prometheus issues describing some limitations of the annotation approach (and there are probably others if you search):
prometheus/prometheus#2353
prometheus/prometheus#3756

That being said, if you want to do annotation-based discovery, you're free to do it with additionalScrapeConfigs.

@mtiller

mtiller commented Feb 1, 2021

@simonpasquier Just to be clear, I'm not saying annotations are sufficient for all cases (which seems to be the argument made in all the issues you cite), so I'm willing to stipulate that up front. I'm simply saying that requiring people to define a ServiceMonitor or PodMonitor for every case raises the barrier to entry for simple configurations; i.e., IMHO, simple things are not simple with this approach.

If that isn't important to the developers, so be it. I'm not complaining. I'm just trying to provide constructive feedback.

You are absolutely correct, it can be done with additionalScrapeConfigs, and that's exactly what I've done. That does seem to be the easiest path to allowing annotation-based scraping.

@paulfantom
Member

This will be possible with the introduction of the generic ScrapeConfig CRD (issue #2787). Currently, there is no plan to support annotation-based discovery without an additional CR.

I am closing this in favor of #2787. If you think this issue is still relevant, please reopen it.

@ooraini

ooraini commented Dec 29, 2021

Just came across this issue. Was something like prometheus.io/{PORT}/path: '/whatever' considered? It might solve the multiple-ports issue. And, for fast lookups, a label with the list of ports, e.g. prometheus.io/ports: '9090,9092', which would also enable scraping on those ports (no need for prometheus.io/{PORT}/scrape: "true").

@SleepyBrett

I also just ran into this, and while I see the utility of the ServiceMonitor, I constantly run into chicken-and-egg problems. The ServiceMonitor is a CRD; if that CRD is not installed in the cluster, then whatever you use to deploy manifests may simply fail. (I have had this problem with people working on an application that ships metrics and a ServiceMonitor but who are trying to deploy on a local or temporary cluster without a monitoring stack.) The workaround for now is to include the CRD in those installs, but this is a hack.
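
A common mitigation (a sketch only; the values flag, chart helpers, and port name are assumptions) is to gate the chart's ServiceMonitor template on both a values flag and the CRD's API group being registered in the cluster:

{{- if and .Values.serviceMonitor.enabled (.Capabilities.APIVersions.Has "monitoring.coreos.com/v1") }}
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ include "mychart.fullname" . }}    # hypothetical named template
spec:
  selector:
    matchLabels:
      app: {{ include "mychart.name" . }}     # hypothetical named template
  endpoints:
    - port: metrics                           # assumption: the metrics port is named "metrics"
{{- end }}

Note that .Capabilities.APIVersions is only populated from a live cluster, so a plain helm template run without cluster access will still skip the resource.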
