
K8s Applications/Services monitoring not working in Prometheus #3538

Closed
varuntalus opened this Issue Dec 4, 2017 · 6 comments

varuntalus commented Dec 4, 2017

What did you do?
I am trying to monitor my Kubernetes cluster components and the applications deployed in it. I am using Prometheus version 1.8.2.

What did you expect to see?
I was expecting metrics that would let me know the health of my K8s cluster components as well as the applications/pods running in it.

What did you see instead? Under which circumstances?
After deploying Prometheus and running it as a pod in my K8s cluster, I can see that target jobs were created for the API server, nodes, cAdvisor, and the node exporters. Jobs were also created for a few endpoints. All of these jobs are UP and running.

But the jobs created for my business applications/services (running in K8s) are all DOWN. The Prometheus UI shows the error below for all of these jobs:

"server returned HTTP status 404"

  • Prometheus version:

    1.8.2

  • Prometheus configuration file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: default
data:
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s

    scrape_configs:
      - job_name: 'kubernetes-apiservers'

        kubernetes_sd_configs:
        - role: endpoints
        scheme: https

        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https

      - job_name: 'kubernetes-nodes'

        scheme: https

        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

        kubernetes_sd_configs:
        - role: node

        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics


      - job_name: 'kubernetes-pods'

        kubernetes_sd_configs:
        - role: pod

        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: kubernetes_pod_name

      - job_name: 'kubernetes-cadvisor'

        scheme: https

        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node

        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

      - job_name: 'kubernetes-service-endpoints'

        kubernetes_sd_configs:
        - role: endpoints

        relabel_configs:
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: kubernetes_name

I have used the annotations below in my application Deployment and Service YAML files (a sketch of how they are attached to a Service follows the list).

    prometheus.io/scrape: "true"
    prometheus.io/probe: "true"
    prometheus.io/path: "/xyz/metrics"
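
For context, here is a minimal sketch of how these annotations would typically be attached to a Service so that the kubernetes-service-endpoints job above discovers it. The service name, selector, and ports are placeholders rather than values from my actual manifests:

```yaml
# Hypothetical Service manifest; name, selector, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: xyz-service                    # placeholder
  namespace: default
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/xyz/metrics"
    prometheus.io/port: "8080"         # assumption: the container port serving metrics
spec:
  selector:
    app: xyz                           # placeholder
  ports:
    - name: http
      port: 8080
      targetPort: 8080
```

With the relabel rules in the kubernetes-service-endpoints job, prometheus.io/path is copied into __metrics_path__ and prometheus.io/port rewrites __address__, so Prometheus ends up scraping http://<endpoint-ip>:8080/xyz/metrics. (prometheus.io/probe is not referenced by any job in this configuration, so it has no effect here.)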

Please look into this issue and help me resolve it.

Thanks!

varuntalus commented Dec 4, 2017

/kind bug

matthiasr commented Dec 4, 2017

I don't think we have enough information to conclude that this is a bug.

Does your service or pod have more than one port? When you curl that metrics path on an individual pod, what happens?

varuntalus commented Dec 4, 2017

@matthiasr

No, our service is exposed on a single port.

When I curl the metrics path (http://10.40.0.2:8080/xyz/metrics), I get the error below:

{"timestamp":1512381497403,"status":404,"error":"Not Found","message":"No message available","path":"/xyz/metrics"}****

matthiasr commented Dec 4, 2017

That is exactly what Prometheus is also getting. Your application is not exposing anything under the path that you have configured Prometheus to scrape from – I'm afraid there is nothing Prometheus can do at this point; this is an issue with your application or the metrics path that you specify via the annotation.
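
For example, if your application turned out to expose its metrics at /metrics (purely an illustration; the real path is whatever your application actually serves), the annotation would need to be adjusted to match:

```yaml
# Hypothetical correction: the annotation must match the path the application
# really serves its metrics on. "/metrics" is an assumption for illustration only.
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/metrics"   # must return Prometheus text format on this path
```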

Under which path does your application expose metrics in the Prometheus format?

I'm going to close this, since Prometheus is behaving correctly. If you have further questions, please take them to the appropriate channels.

matthiasr closed this Dec 4, 2017

varuntalus commented Dec 4, 2017

@matthiasr
As I mentioned in my previous comment, I am using the annotations below:
prometheus.io/scrape: "true"
prometheus.io/probe: "true"
prometheus.io/path: "/xyz/metrics"

That is what I am not able to understand. Whatever metrics path I set on my service/application via the annotation shows up on the Prometheus Targets page, but there are no metrics available. What could be the reason for this?

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
