Cadvisor exposes redundant Prometheus metrics with different labels #2092

@tiwalter

Description

I scraped metrics from cAdvisor with the following scrape job:

```yaml
- job_name: kubernetes-cadvisor
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: https
  kubernetes_sd_configs:
  - api_server: null
    role: node
    namespaces:
      names: []
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: false
  relabel_configs:
  - separator: ;
    regex: __meta_kubernetes_node_label_(.+)
    replacement: $1
    action: labelmap
  - separator: ;
    regex: (.*)
    target_label: __address__
    replacement: kubernetes.default.svc:443
    action: replace
  - source_labels: [__meta_kubernetes_node_name]
    separator: ;
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    action: replace
```
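If I read the relabel rules correctly, each discovered node is rewritten into a target that is scraped through the API server proxy; for a hypothetical node named `worker-1`, the effective scrape URL would be:

```
https://kubernetes.default.svc:443/api/v1/nodes/worker-1/proxy/metrics/cadvisor
```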

When I monitored, for example, the metric container_cpu_usage_seconds_total with the query `container_cpu_usage_seconds_total{pod_name="your-pod-name"}` in the UI, I noticed two redundant results.

The only difference between the two redundant metrics is their labels:
the "id" in the first metric is longer than in the second one, and the second metric lacks the "container_name", "image", and "name" labels.
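In case it helps: one way to look at only one of the two series is to filter on the labels that differ. A hedged sketch (with "your-pod-name" as a placeholder, and assuming the series without "container_name" is some kind of aggregate rather than a scrape artifact):

```
container_cpu_usage_seconds_total{pod_name="your-pod-name", container_name!=""}
```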

The float values of the two series deviate only minimally. In my opinion, this is caused by different scrape times (one series is scraped milliseconds before the other).

Does anybody know why two redundant metrics are exposed? Or is this a bug?
