[Question] Application metrics monitoring of Kubernetes Pods #2841

Closed
boeboe opened this Issue Jun 13, 2017 · 3 comments


boeboe commented Jun 13, 2017

Hi all,

We are already using node-exporters to collect Docker/container metrics within our Kubernetes environment. Next, we would also like to collect application metrics (provided by spring-actuator in our case, e.g. http://pod-name:8080/prometheus). How is this typically done, keeping in mind that we cannot use Kubernetes Services as fixed DNS names (requests to a Service are load-balanced over the pods behind it), and that pods disappear and appear all the time, changing their names and hence their addressability?

Our initial naive approach was the following (but this is not sufficient, since requests are load-balanced by the Service):

- job_name: 'example-service'
  scheme: http
  metrics_path: '/prometheus'
  static_configs:
  - targets: ['example-service:8080']

Any advice or examples would be really great.

Thanks a lot in advance,
Bart


brian-brazil commented Jun 13, 2017

Usage questions are best asked at https://groups.google.com/forum/#!forum/prometheus-users

Look at kubernetes service discovery.
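
For reference, a bare-bones sketch of what that looks like in prometheus.yml, assuming Prometheus runs inside the cluster with a service account that is allowed to list pods (the job name is just a placeholder):

- job_name: 'kubernetes-pods-sketch'
  kubernetes_sd_configs:
  - role: pod
  # With role: pod, every container port of every discovered pod becomes a
  # candidate target; relabel_configs are normally added on top of this to
  # keep only the pods you actually want to scrape (see the fuller example
  # further down in this thread).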


boeboe commented Jun 13, 2017

Thanks @brian-brazil, I'll have a look.

Found an example in the meantime:

# Example scrape config for pods
#
# The relabeling allows the actual pod scrape endpoint to be configured via the
# following annotations:
#
# * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
# * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
# * `prometheus.io/port`: Scrape the pod on the indicated port instead of the default of `9102`.
- job_name: 'kubernetes-pods'

  kubernetes_sd_configs:
  - role: pod

  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: kubernetes_pod_name
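
For the spring-actuator case above, the pods themselves then need matching annotations in their pod template so the keep/replace rules select them. A sketch, with the surrounding Deployment fields omitted and the port/path taken from the example endpoint in the first post:

# Kubernetes pod template metadata (part of the Deployment manifest, not prometheus.yml)
template:
  metadata:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/path: "/prometheus"
      prometheus.io/port: "8080"

Note that the annotation values have to be strings, which is why "true" and "8080" are quoted; the keep rule's regex: true then matches the literal string value.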

bibinwilson commented Oct 13, 2017

Hi @boeboe,
I am stuck in a similar situation. I tried the config above, but it didn't work. Were you able to scrape the metrics using it?
