
Discover k8s ingress #3071

Closed
arnisoph opened this Issue Aug 14, 2017 · 18 comments

arnisoph commented Aug 14, 2017

It'd be great to have automatic discovery of ingress resources based on annotations, as already exists for services, pods, endpoints, etc., for blackbox checks and the like:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice
  annotations:
    prometheus.io/probe: 'true'
    prometheus.io/module: http_4xx_ssl
spec:
  tls:
...

Right now, ingress discovery doesn't seem to be implemented at all.

matthiasr (Contributor) commented Aug 15, 2017

arnisoph (Author) commented Aug 15, 2017

Hmm, are you sure? Does Prometheus need to know how the hostnames etc. are implemented? If this information isn't available via the API yet, why not provide it via (ingress) annotations?

The following example is valid and contains the data you mentioned:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: kube-system
  annotations:
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts:
      - prometheus.main.test.example.com
      secretName: cluster-cert-tls
  rules:
    - host: prometheus.main.test.example.com
      http:
        paths:
        - path: /
          backend:
            serviceName: prometheus
            servicePort: 9090
brancz (Member) commented Aug 15, 2017

As you already mentioned, this specifically requires the blackbox exporter as well, which makes me feel like a higher-level abstraction needs to come into play here. FWIW, we have plans to integrate blackbox exporter support into the Prometheus Operator for exactly this reason.

matthiasr (Contributor) commented Aug 16, 2017

Prometheus tries to be more specific than "this hostname, just pick any IP". You can still do it (just directly configure it as an endpoint).

However, as @brancz says, this is stringing quite a lot of things together. Even if Prometheus were to ingest the ingress host names as a target, it also needs to know how you set up the blackbox exporter and a way to pass the module on to the exporter, so this spans many more layers than "just" adding a service discovery. It would also need to be dangerously complex to work for everyone.

What you want to achieve is already possible with a little scripting and tying things together:

  1. use kubectl to get all ingresses, process them, and write a target list for file_sd (see the sketch below)
  2. configure a job to read from that, and relabel to bend the scrape through your blackbox exporter, wherever it runs
  3. there is no 3.
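
For illustration, step 1 might produce a file_sd target file like this (a minimal sketch; the hostname is borrowed from the ingress example above):

[
  {
    "targets": ["prometheus.main.test.example.com"]
  }
]

Step 2 would then be a scrape config along these lines (the exporter address blackbox-exporter:9115 is an assumption; point it at wherever yours runs):

scrape_configs:
  - job_name: 'ingress-probes'
    metrics_path: /probe
    params:
      module: [http_2xx]
    file_sd_configs:
      - files:
        - /etc/prometheus/targets/ingresses.json
    relabel_configs:
      # hand the discovered hostname to the blackbox exporter as the probe target
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      # scrape the exporter itself rather than the ingress host
      - target_label: __address__
        replacement: blackbox-exporter:9115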

Building this into Prometheus for all cases would need so many knobs that configuring it would be just as complex as integrating it this way for your case, so I would rather not go down this route.

arnisoph (Author) commented Aug 16, 2017

You can still do it (just directly configure it as an endpoint).

Would you be so kind as to give me a code example?

Even if Prometheus were to ingest the ingress host names as a target, it also needs to know how you set up the blackbox exporter

What do you mean by "set up"?

and a way to pass the module on to the exporter,

Relabelling, using an annotation as the source label?
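
Something like this, perhaps (a sketch; __meta_kubernetes_ingress_annotation_prometheus_io_module presumes both the proposed ingress role and the prometheus.io/module annotation from the issue description):

relabel_configs:
  - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_module]
    target_label: __param_module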

It would also need to be dangerously complex so it works for everyone.

I'd totally agree if you were right. For now I don't see that complexity. I might be overlooking something.

What you want to achieve is already possible with a little scripting and tying things together:

Actually I am looking for autodiscovery instead of static target configuration :)

matthiasr (Contributor) commented Aug 18, 2017

Would you be so kind as to give me a code example?

You can use a static config or a file_sd config where the target in the file is just the domain in question.
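
For example, as a sketch (the relabelling is the usual blackbox exporter pattern, and blackbox-exporter:9115 is again an assumed address):

scrape_configs:
  - job_name: 'probe-ingress-domain'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
        - prometheus.main.test.example.com
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115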

What do you mean by "set up"?

Where is the blackbox exporter running? What port is it on? Which modules are configured?

Actually, going back, maybe I misread what you want. If the ingress discovery were to emit one target per rule, with appropriate meta-labels, you could then do all the relabelling on top. Nevertheless, in the absence of a built-in discovery you can run a little sidecar that queries the Kubernetes API (or calls out to kubectl) and emits these targets-per-rule. This could also help flesh out the interface (label names and such) and help come up with a good configuration example.
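
Such a sidecar could be little more than a shell loop; as a sketch (assuming kubectl and jq are available in the sidecar image, with an arbitrary output path and interval):

#!/bin/sh
# Emit a file_sd target list with one entry per ingress rule host,
# then let Prometheus pick it up via file_sd_configs.
while true; do
  kubectl get ingress --all-namespaces -o json \
    | jq '[{targets: [.items[].spec.rules[].host]}]' \
    > /etc/prometheus/targets/ingresses.json
  sleep 60
done

A real version would also want to emit the annotations and rule paths as labels so the relabelling can key off them.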

discordianfish (Member) commented Aug 21, 2017

I really like the idea of exposing the ingress. There wouldn't be much to set up; that could be done with relabelling, like it is for the other 'roles'. This can be used for configuring any kind of exporter that takes URLs as parameters, not only the blackbox-exporter. You might even just want to use it to access something behind an internal LB for some weird enterprisey reasons.
I'm happy to provide a full example of how configuration and relabelling would look if you're open to discussing that.

grobie (Member) commented Aug 21, 2017

matthiasr (Contributor) commented Aug 22, 2017

SGTM, I retract my objections.

discordianfish (Member) commented Aug 22, 2017

Use case

  • I want to monitor my ingress via a blackbox-exporter
  • I want to access an exporter via TLS with public-chain verification (hence the
    need to connect to the FQDN)
  • I want to access an exporter which can only be reached via an ingress

Implementation

  • Add role = ingress to the Kubernetes SD
  • The SD returns the ingress host in the address label
  • as well as additional labels:
    • ingress_name
    • ingress_scheme
    • ingress_host
    • ingress_path
  • Since an ingress can contain multiple hosts, the SD may return multiple
    targets for a single ingress

Config example

scrape_configs:
  - job_name: 'kubernetes-ingresses'

    metrics_path: /probe
    params:
      module: [http_2xx]

    kubernetes_sd_configs:
      - role: ingress

    relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
discordianfish (Member) commented Aug 22, 2017

That's how I imagine it. It should be straightforward to implement. I'll submit a PR if you agree this is the way to go.

grobie (Member) commented Aug 22, 2017

I want to access an exporter which can only be reached via an ingress

What's a legitimate use case for that? I can't see one at the moment; Prometheus should be configured to scrape metrics with as few steps as possible between itself and the target. Even if there is a valid edge case, I'd not mention this anywhere in the docs, to avoid pointing people down the wrong path. We should even consider actively warning against using it instead of scraping targets directly.

matthiasr (Contributor) commented Aug 23, 2017

@grobie for example, scraping kube-state-metrics from a Prometheus that is not part of the overlay network. Additionally, this would avoid issues with many-to-many matching and aggregations if there is more than one instance of it. Not the way I would recommend doing it, but I can see the need.

@discordianfish LGTM, thank you!

discordianfish (Member) commented Aug 24, 2017

It's one of those things that doesn't make sense until you have to do it, because the alternative would be a month-long cross-team project to refactor your infrastructure. Either way, I believe exposing the ingresses makes sense even without this point. I assume you agree?

grobie (Member) commented Aug 24, 2017

discordianfish (Member) commented Aug 24, 2017

That sounds reasonable.

brian-brazil (Member) commented Aug 24, 2017

The service role is similar.

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
