I scraped metrics from Cadvisor with the following scrape job:
- job_name: kubernetes-cadvisor
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: https
  kubernetes_sd_configs:
  - api_server: null
    role: node
    namespaces:
      names: []
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: false
  relabel_configs:
  - separator: ;
    regex: __meta_kubernetes_node_label_(.+)
    replacement: $1
    action: labelmap
  - separator: ;
    regex: (.*)
    target_label: __address__
    replacement: kubernetes.default.svc:443
    action: replace
  - source_labels: [__meta_kubernetes_node_name]
    separator: ;
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    action: replace
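For illustration: with these relabel rules, every node target is scraped through the API server proxy. For a hypothetical node named my-node (name made up here, not from my cluster), the effective scrape URL would be roughly:

https://kubernetes.default.svc:443/api/v1/nodes/my-node/proxy/metrics/cadvisor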
When I monitored, for example, the metric container_cpu_usage_seconds_total with the query container_cpu_usage_seconds_total{pod_name="your-pod-name"} in the UI, I noticed two redundant results.
The two redundant series differ only in their labels.
The "id" label of the first series is longer than that of the second one, and the second series has no "container_name", "image" and "name" labels.
The float values of the two series show only a minimal deviation. In my opinion, this is caused by the different scrape times (one series gets scraped milliseconds before the other).
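Roughly, the two results look like this (label values shortened and partly invented for illustration; only the shape of the labels reflects what I actually see):

container_cpu_usage_seconds_total{container_name="app", id="/kubepods/burstable/pod<uid>/<container-id>", image="my-registry/app:1.0", name="k8s_app_...", pod_name="your-pod-name"} 12345.67
container_cpu_usage_seconds_total{id="/kubepods/burstable/pod<uid>", pod_name="your-pod-name"} 12345.71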
Does anybody know why two redundant series are exposed? Or is this a bug?