
Kubernetes and Prometheus on different hosts #2430

Closed · prasenforu opened this issue Feb 15, 2017 · 18 comments

prasenforu commented Feb 15, 2017

My Prometheus server is running on a different host.
I was using this link as a guide for the config:
https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml

I'm not sure where I can specify my Kubernetes API server endpoints in the above example.

andrewhowdencom commented Feb 15, 2017

You might be able to point it at the same API endpoint that kubectl uses. However, I'm not sure discovery will work as it does in the above example; you'd need to be able to route between the networks (the one Kubernetes is using to assign IPs to pods, and the one your Prometheus server is running on).

This might be a good candidate for federation. https://prometheus.io/docs/operating/federation/

I'd suggest moving this question to StackOverflow, as it's not strictly code related, but rather docs related.

brancz commented Feb 15, 2017

See the api_server field here; by default, Prometheus will attempt an in-cluster connection to the apiserver using the available ServiceAccount. Your instance will also need access to the network your Pods use to communicate (flannel, or equivalent).
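
For illustration, a minimal sketch of what pointing kubernetes_sd_configs at an external API server could look like (the address below is a placeholder):

```yaml
scrape_configs:
  - job_name: 'kubernetes-nodes'
    kubernetes_sd_configs:
      # When api_server is omitted, Prometheus assumes it is running
      # in-cluster and uses the mounted ServiceAccount credentials.
      # From outside the cluster, point it at the API server explicitly.
      - api_server: 'https://k8s-apiserver.example.com:6443'  # placeholder address
        role: node
```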

prasenforu commented Mar 3, 2017

Thanks @brancz.
I was finally able to set up Prometheus and Grafana externally, monitoring Kubernetes from outside the cluster.

Everything is OK, but I'm facing a problem with "kubernetes-service-endpoints": it is reported as down, and it looks like it's picking up internal Kubernetes IPs.

My Prometheus config file is as follows:

```yaml
# prometheus.yml
# my global config
global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # Evaluate rules every 15 seconds.
  # scrape_timeout is set to the global default (10s).

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
      monitor: 'codelab-monitor'

# Load and evaluate rules in this file every 'evaluation_interval' seconds.
rule_files:
  # - "first.rules"
  # - "second.rules"

# Scrape configurations:
scrape_configs:

  - job_name: 'Kubernetes-Hosts'
    static_configs:
      - targets: ['ip-10-52-2-56.ap-northeast-2.compute.internal:9100','ip-10-52-2-59.ap-northeast-2.compute.internal:9100','ip-10-52-2-54.ap-northeast-2.compute.internal:9100']
    relabel_configs:
       - source_labels: [ __address__ ]
         target_label: instance
         regex: ip-10-52-2-56.ap-northeast-2.compute.internal:9100
         replacement: Master
       - source_labels: [ __address__ ]
         target_label: instance
         regex: ip-10-52-2-59.ap-northeast-2.compute.internal:9100
         replacement: Node-1
       - source_labels: [ __address__ ]
         target_label: instance
         regex: ip-10-52-2-54.ap-northeast-2.compute.internal:9100
         replacement: Node-2

  - job_name: 'kubernetes-nodes'
    kubernetes_sd_configs:
    - api_server: ip-10-52-2-56.ap-northeast-2.compute.internal:8080
      role: node
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - source_labels: [__address__]
      regex: '(.*):10250'
      replacement: '${1}:10255'
      target_label: __address__
    metric_relabel_configs:
    - source_labels: [io_kubernetes_container_name,container_name]
      action: replace
      regex: (.*);(.*)
      replacement: '${1}${2}'
      target_label: io_kubernetes_container_name
    - source_labels: [kubernetes_pod_name,pod_name]
      action: replace
      regex: (.*);(.*)
      replacement: '${1}${2}'
      target_label: kubernetes_pod_name
    - source_labels: [kubernetes_pod_name]
      action: replace
      target_label: io_kubernetes_pod_name

  - job_name: 'kubernetes-node-exporter'
    kubernetes_sd_configs:
    - api_server: ip-10-52-2-56.ap-northeast-2.compute.internal:8080
      role: node
    relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - source_labels: [__meta_kubernetes_role]
      action: replace
      target_label: kubernetes_role
    - source_labels: [__address__]
      regex: '(.*):10250'
      replacement: '${1}:9100'
      target_label: __address__
    - source_labels: [__meta_kubernetes_node_label_kubernetes_io_hostname]
      target_label: instance  # labels beginning with __ are dropped, so use "instance"
      # set "name" value to "job"
    - source_labels: [job]
      regex: 'kubernetes-(.*)'
      replacement: '${1}'
      target_label: name

  - job_name: 'kubernetes-service-endpoints'
    kubernetes_sd_configs:
    - api_server: ip-10-52-2-56.ap-northeast-2.compute.internal:8080
      role: endpoints
    relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
      action: replace
      target_label: __scheme__
      regex: (https?)
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)  # port on __address__ is optional
      replacement: $1:$2
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_service_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name

  - job_name: 'kubernetes-services'
    metrics_path: /probe
    params:
      module: [http_2xx]
    kubernetes_sd_configs:
    - api_server: ip-10-52-2-56.ap-northeast-2.compute.internal:8080
      role: service
    relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
      action: keep
      regex: true
    - source_labels: [__address__]
      target_label: __param_target
    - target_label: __address__
      replacement: blackbox
    - source_labels: [__param_target]
      target_label: instance
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_service_namespace]
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      target_label: kubernetes_name

  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
    - api_server: ip-10-52-2-56.ap-northeast-2.compute.internal:8080
      role: pod
    relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)  # port on __address__ is optional
      replacement: ${1}:${2}
      target_label: __address__
    - action: labelmap
      regex: __meta_kubernetes_pod_label_(.+)
    - source_labels: [__meta_kubernetes_pod_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_pod_name]
      action: replace
      target_label: kubernetes_pod_name

```

[screenshot attachment]

brancz commented Mar 3, 2017

Your Prometheus instance needs access to the private network your Kubernetes cluster is using. For example, if you are using flannel, you need to add the machine Prometheus is running on to that flannel network. Once you can ping those private IPs from that machine, this discovery should work as well.

prasenforu commented Mar 3, 2017

Does that mean one Prometheus pod/container needs to run inside the Kubernetes cluster?
If yes, then we have two Prometheus servers/pods: one inside the Kubernetes cluster and another outside of it.

brancz commented Mar 3, 2017

That would be a possibility. Having Prometheus run inside the cluster is certainly the most common practice when monitoring Kubernetes and things running on top of it, but it would also work if your machine outside the Kubernetes cluster is simply part of the network and able to route to those IPs.

prasenforu commented Mar 3, 2017

The reason for keeping it outside is that we are not only monitoring Kubernetes; we are monitoring other things as well.

If I run a Prometheus pod/container inside the cluster, what will its config file look like? Will just target: localhost:9090 work?

Does any configuration need to change in the external Prometheus?

Or is there an alternate approach?

brancz commented Mar 3, 2017

You can still monitor things outside of Kubernetes with a Prometheus that is running inside of Kubernetes. Otherwise, there is nothing wrong with running two instances of Prometheus: one for monitoring targets inside of Kubernetes and one for monitoring targets outside of Kubernetes.

> If I run a Prometheus pod/container inside the cluster, what will its config file look like? Will just target: localhost:9090 work?

I don't understand this; could you please rephrase it?

prasenforu commented Mar 6, 2017

brancz commented Mar 7, 2017

Great! Can we close this issue here then?

prasenforu commented Mar 7, 2017

beorn7 closed this Mar 7, 2017

hiscal2015 commented Apr 1, 2017

@prasenforu I also need to run Prometheus outside of the Kubernetes cluster, but how do I deal with the token? I see you only defined the API server address in the config file.

prasenforu commented Apr 1, 2017

I'd suggest always running it inside Kubernetes.
I was using an external setup without security (non-SSL), so no token was required.
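
For those who do need a token (i.e. a TLS-secured API server), here is a sketch of how kubernetes_sd_configs could be configured, assuming a ServiceAccount token and the cluster CA certificate have been copied onto the Prometheus host; all addresses and paths below are placeholders:

```yaml
  - job_name: 'kubernetes-nodes'
    kubernetes_sd_configs:
      - api_server: 'https://k8s-apiserver.example.com:6443'  # placeholder address
        role: node
        # Token of a ServiceAccount with read access to the API,
        # exported from the cluster (placeholder path):
        bearer_token_file: /etc/prometheus/k8s-token
        tls_config:
          # The cluster's CA certificate (placeholder path):
          ca_file: /etc/prometheus/k8s-ca.crt
```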

greenled commented Nov 23, 2017

I'm also having trouble with the token. @prasenforu, did you find anything about it?

costimuraru commented Jun 25, 2018

We also have the Prometheus server outside of k8s. Any idea how to deal with the token and make it work?

lnformer commented Aug 19, 2018

I think the documentation is lacking on how to properly monitor Kubernetes externally with Prometheus.

damien-roche commented Aug 25, 2018

Utterly confused this hasn't been addressed yet. Is everybody monitoring Kube from inside Kube?

I have multiple Kube clusters and I have random nodes dotted around (apps, postgres, rabbitmq, etc). I have a central Prometheus server which pulls in metrics from my nodes no problem. I don't want another Prometheus instance chewing up RAM on every Kube cluster; I already have an instance.

"Otherwise there is nothing wrong with running two instances of Prometheus, one for monitoring targets inside of Kubernetes and one for monitoring targets outside of Prometheus."

So if we have multiple Kube clusters we will have a Prometheus instance for each cluster? What if you have 5 clusters? Now I have to manage 5 different Prometheus instances each related to a different cluster? I just want centralised monitoring.

Can somebody in the know please document this? I don't understand how it isn't a common use-case.

EDIT: I have turned up some possible solutions.

An answer on SO suggests a federation setup: run Prometheus inside your cluster, and expose its metrics from there to your central/external Prometheus instance(s). (https://stackoverflow.com/a/47643005/419017)
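
For illustration, a minimal federation sketch along those lines for the central Prometheus. honor_labels, metrics_path: /federate, and the match[] parameter are standard federation settings; the target address is a placeholder for an in-cluster Prometheus exposed outside the cluster:

```yaml
scrape_configs:
  - job_name: 'k8s-cluster-1-federation'
    honor_labels: true                # keep labels as set by the in-cluster Prometheus
    metrics_path: /federate
    params:
      'match[]':
        - '{job=~"kubernetes-.*"}'    # pull only the Kubernetes-related series
    static_configs:
      - targets: ['prometheus.cluster-1.example.com:9090']  # placeholder address
```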

There is also a project here which exposes cluster level metrics: https://github.com/kubernetes/kube-state-metrics
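
If kube-state-metrics is exposed outside the cluster (e.g. via an Ingress or NodePort), the central Prometheus could scrape it with a plain static config; the address below is a placeholder:

```yaml
  - job_name: 'kube-state-metrics'
    static_configs:
      - targets: ['kube-state-metrics.cluster-1.example.com:8080']  # placeholder address
```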

Hope that gives someone else something to run with.

prasenforu commented Aug 26, 2018

But either way, we would have to run a Prometheus in each cluster, and that is what I do not want. Basically, I want to run Prometheus completely outside of the Kubernetes cluster.
