
Missing kubelet_running_pod_count in 1.7.5 #3181

Closed
ericuldall opened this Issue Sep 15, 2017 · 2 comments

ericuldall commented Sep 15, 2017

What did you do?
Upgraded my GCE cluster to 1.7.5

What did you expect to see?
I expected my Prometheus metrics to keep reporting as before.

What did you see instead? Under which circumstances?
The kubelet_running_pod_count metric is no longer reported.
I believe this may affect other (or all) kubelet metrics, but the running pod count is the only one I use at the moment.

Environment

Running in Docker on Google Container Engine

  • Prometheus version:

    prometheus (version=1.4.1, branch=master, revision=2a89e8733f240d3cd57a6520b52c36ac4744ce12)

  • Prometheus configuration file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
    scrape_configs:
    - job_name: 'kubernetes-node-exporter'
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - source_labels: [__meta_kubernetes_role]
        action: replace
        target_label: kubernetes_role
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
    - job_name: 'kubernetes-cadvisor'
      scheme: https

      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token

      kubernetes_sd_configs:
      - role: node

      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}:4194/proxy/metrics
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: (.+)(?::\d+);(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_service_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: 'kubernetes-services'
      scheme: https
      metrics_path: /probe
      params:
        module: [http_2xx]
      kubernetes_sd_configs:
      - role: service
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_service_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-pods'
      scheme: https
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: (.+):(?:\d+);(\d+)
        replacement: ${1}:${2}
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_pod_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

ericuldall commented Sep 15, 2017

Sorry, I just realized this was addressed in #2613.
I was able to fix it by updating the kubernetes-nodes job in my ConfigMap to match
https://raw.githubusercontent.com/prometheus/prometheus/master/documentation/examples/prometheus-kubernetes.yml
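
For reference, a minimal sketch of the kubernetes-nodes job from that upstream example as it looked around this time (the example file changes between revisions, so treat this as an approximation rather than the canonical config): kubelet metrics such as kubelet_running_pod_count are fetched through the API server proxy instead of by contacting the kubelet port directly.

    # Sketch of the upstream example's kubernetes-nodes job (approximate).
    # Kubelet metrics are scraped via the API server proxy path
    # /api/v1/nodes/<node>/proxy/metrics rather than the node address itself.
    - job_name: 'kubernetes-nodes'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics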

ericuldall closed this Sep 15, 2017

mcwienczek added a commit to mcwienczek/charts that referenced this issue Feb 10, 2018

Updated values.yaml cadvisor endpoint
Updated cadvisor scrape endpoint so that it is compatible with Kubernetes 1.7.3+
See more here prometheus/prometheus#3181 (comment)
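
The chart change above moves cAdvisor scraping to the kubelet's dedicated cAdvisor metrics path exposed through the API server proxy, rather than the old :4194 cAdvisor port used in the config quoted earlier. A rough sketch of the relevant relabelling (the exact chart values may differ):

    # Approximate cAdvisor target for Kubernetes 1.7.3+: container metrics
    # are served by the kubelet at /metrics/cadvisor, reached via the API
    # server proxy, instead of the standalone cAdvisor port (:4194).
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor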

k8s-ci-robot added a commit to helm/charts that referenced this issue Feb 20, 2018

[Prometheus] Updated cAdvisor endpoint (#3684)
* Updated values.yaml cadvisor endpoint

Updated cadvisor scrape endpoint so that it is compatible with Kubernetes 1.7.3+
See more here prometheus/prometheus#3181 (comment)

* Added extensive description about the problem with cadvisor

* Bumped chart version

* Removed trailing spaces

* Update Chart.yaml

ichtar pushed a commit to BestMile/charts that referenced this issue May 15, 2018

[Prometheus] Updated cAdvisor endpoint (helm#3684)
* Updated values.yaml cadvisor endpoint

Updated cadvisor scrape endpoint so that it is compatible with Kubernetes 1.7.3+
See more here prometheus/prometheus#3181 (comment)

* Added extensive description about the problem with cadvisor

* Bumped chart version

* Removed trailing spaces

* Update Chart.yaml

voron added a commit to arilot/charts that referenced this issue Sep 5, 2018

[Prometheus] Updated cAdvisor endpoint (helm#3684)
* Updated values.yaml cadvisor endpoint

Updated cadvisor scrape endpoint so that it is compatible with Kubernetes 1.7.3+
See more here prometheus/prometheus#3181 (comment)

* Added extensive description about the problem with cadvisor

* Bumped chart version

* Removed trailing spaces

* Update Chart.yaml

Signed-off-by: voron <av@arilot.com>

lock bot commented Mar 23, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 23, 2019
