
Update manifests to latest version of Prometheus Operator #16

Merged: 5 commits merged into prometheus-operator:master on Dec 10, 2016

Conversation

brancz (Collaborator) commented Dec 2, 2016

Tried this out with a container image built from HEAD of the Prometheus Operator.

List of changes, divided into commits:

  • Set lower resource requests by default (simply for demo purposes, as this currently won't work, for example, on minikube with only 1GB of memory)
  • Remove the unnecessary prometheus.io/scrape annotation: (tl;dr) more confusing than helpful
  • Add Alertmanager support in the TPRs and in the manifests themselves (recently added to the Prometheus Operator)
  • Add a kube-dns Service so that skydns and dnsmasq metrics can be collected. This change reflects work upstream and on bootkube.
  • Update the node-exporter version

It would probably make sense to cut another release of the Prometheus Operator, just so we can at least partially version this until we get versioned user-guides.


These annotations made sense in pre-v1.3.0 Prometheus releases; however,
with >=v1.3.0 and the Prometheus Operator these annotations are more
confusing than helpful.
The latest version of the Prometheus Operator requires Prometheus
>=v1.4.0 for the Alertmanager discovery feature.
The ports reflect the upstream kube-dns manifests of bootkube and
kubernetes/kubernetes.
fabxc (Contributor) commented Dec 2, 2016

👍 looks good.

Yes, we should cut another version to integrate into Tectonic etc.

@fabxc fabxc merged commit dda5b0c into prometheus-operator:master Dec 10, 2016
@brancz brancz deleted the default-resources branch December 10, 2016 18:45
fabxc (Contributor) commented Jun 2, 2017 via email

@julianvmodesto

Got it – this is helpful and makes sense now that I've digested it, thank you!

@cornelius-keller

The problem I see here is that these annotations are fully self-service and a well-working way to communicate to Prometheus which metrics should be scraped. With ServiceMonitors this self-service no longer works, since they need to be in the monitoring namespace and I don't want to give everybody access to it.
As discussed here: prometheus-operator/prometheus-operator#813 .
So for me, regarding ease of setup etc., prometheus-operator is a great step forward; regarding the self-service capabilities that my previous setup had, it is a step back.

brancz (Collaborator, Author) commented Mar 22, 2018

The ServiceMonitor has a namespaceSelector for the services, so the only thing you have to do is establish a convention in your organization, which boils down to the same as the annotation. You simply need your organization to agree on a commonly named port and a label like "monitoring: true". The namespace selector can then select Services globally. A minimal sketch of what such a convention-based ServiceMonitor could look like follows below.
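
A sketch of such a ServiceMonitor; the port name "metrics" and the label "monitoring: true" are placeholder conventions, not something defined by this PR:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: org-wide-convention   # hypothetical name
      namespace: monitoring
    spec:
      # Look at Services in every namespace...
      namespaceSelector:
        any: true
      # ...but only those that opt in via the agreed-upon label.
      selector:
        matchLabels:
          monitoring: "true"
      endpoints:
      # Scrape the commonly named port on each selected Service.
      - port: metrics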

brancz (Collaborator, Author) commented Mar 22, 2018

However, we acknowledge the need people have, and are trying to think of a way we can introduce something broader without violating the concerns we currently have.

@StevenACoffman (Contributor)

We have an established convention, but we also often have third-party tools installed which come with their own conflicting conventions. As a hybrid solution, we are applying a custom Prometheus on top of the operator. Thanks again @solsson

maver1ck commented Sep 25, 2018

I'd like to add one more thing to this issue.
There are a lot of third-party tools and ready-to-use Helm charts that use annotations, and we can't use them out of the box.

Example:
https://github.com/confluentinc/cp-helm-charts

@StevenACoffman (Contributor)

That is exactly our issue. Third-party Helm apps often have this need, which conflicts with our use of the operator at present.

maver1ck commented Sep 25, 2018

I found the following solution.
In the Prometheus Helm chart configuration, add this:

  additionalScrapeConfigs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
    - role: pod
    relabel_configs:
    # Only keep pods annotated with prometheus.io/scrape: "true".
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # Override the metrics path from the prometheus.io/path annotation, if set.
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # Override the scrape port using the prometheus.io/port annotation, if set.
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
    # Copy all pod labels onto the scraped targets.
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name

Then all pods with the prometheus.io/scrape annotation are processed.
You don't need a custom Prometheus for that :)
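
For reference, a pod that this job would pick up carries annotations along these lines (the pod name, image, port, and path values here are only illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app                 # hypothetical pod
      annotations:
        prometheus.io/scrape: "true"    # required by the keep rule above
        prometheus.io/port: "8080"      # rewritten into __address__
        prometheus.io/path: "/metrics"  # rewritten into __metrics_path__
    spec:
      containers:
      - name: example-app
        image: example-app:latest       # hypothetical image
        ports:
        - containerPort: 8080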

suppix commented Sep 30, 2018

@maver1ck Could you please tell me where I should put this configuration?

maver1ck commented Sep 30, 2018

You need to add this to the Helm chart configuration (values.yaml):
https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus/values.yaml#L353

Unfortunately this option is undocumented.

@absolutejam

(quoting maver1ck's additionalScrapeConfigs solution above)

There is also some information available at https://github.com/coreos/prometheus-operator/blob/master/Documentation/additional-scrape-config.md
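
In short, that doc has you put the extra scrape job into a Secret and reference it from the Prometheus custom resource, roughly like this (the resource name "k8s" and namespace "monitoring" match the kube-prometheus defaults; the secret name and key are just example values):

    apiVersion: monitoring.coreos.com/v1
    kind: Prometheus
    metadata:
      name: k8s
      namespace: monitoring
    spec:
      # ... existing fields ...
      additionalScrapeConfigs:
        name: additional-scrape-configs   # Secret containing the extra scrape config
        key: prometheus-additional.yaml   # key inside that Secret

The Secret itself is created from a plain file containing the scrape job, as the linked document describes.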

@jmvizcainoio

You can also add this config to scrape services:

      - job_name: 'kubernetes-service'
        kubernetes_sd_configs:
        - role: service
        relabel_configs:
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: kubernetes_service_name

knative-prow-robot referenced this pull request in knative/eventing Aug 27, 2019
* Make eventing controller observable
- Enable scraping of eventing-controller
- Add eventing reconciler dashboard

* Retain vendor license

* Removed eventing reconciler dashboard

* Make broker filter and ingress scrapeable

* Removed hard-coded annotation for prometheus
- Refer https://github.com/coreos/kube-prometheus/pull/16#issuecomment-305933103

* Made sources-controller observable

* Remove sources metrics service

* Fixed as per PR comments
aslom referenced this pull request in aslom/eventing Sep 5, 2019
(same commit list as above)

bchanson commented Mar 6, 2021

If you're compiling the YAMLs using the jsonnet file, follow the instructions mentioned earlier to create the additional scrape config secret.

I tried to patch the Prometheus resource directly after building/applying the YAMLs via jsonnet. It didn't work. I had to add the additionalScrapeConfigs config to the jsonnet file.

This does not go in the _config+:: section. It goes in prometheus+::, like the serviceMonitor definitions.

    prometheus+:: {
      prometheus+: {
        spec+: {
          additionalScrapeConfigs+: {
            // Must match the name and key of the Secret that holds the extra scrape config.
            name: 'additional-scrape-configs',
            key: 'additional-scrape-configs.yaml',
          },
        },
      },
    },
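
The Secret that this points at still has to exist. A sketch of what it could look like, with the name and key chosen to match the snippet above (the actual scrape jobs go under that key as a YAML string):

    apiVersion: v1
    kind: Secret
    metadata:
      name: additional-scrape-configs
      namespace: monitoring
    stringData:
      additional-scrape-configs.yaml: |
        - job_name: 'kubernetes-pods'
          kubernetes_sd_configs:
          - role: pod
          # ... relabel_configs as in the earlier comments ...

Creating it with kubectl create secret generic --from-file of a file named additional-scrape-configs.yaml should produce an equivalent object.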
