39 changes: 39 additions & 0 deletions docs/send-data/kubernetes/troubleshoot-collection.md

where `metadata.name` is the value from the Argo Application manifest.

### Missing metrics in dashboards

Dashboards may be missing many cluster-level and pod-level metrics. This can occur when `kube-state-metrics` is not scraped, for example because the `prometheus-kube-state-metrics` ServiceMonitor resource is missing the label `release: prometheus`.

After the label is added manually through the Helm values, the dashboards populate correctly and all data is pulled:

```yaml
serviceMonitorSelector:
matchLabels:
monitoring: prometheus

kube-state-metrics:
prometheus:
monitor:
enabled: true
additionalLabels:
monitoring: prometheus
```

The `release:` label value must match the release name of the `kube-prometheus-stack` Helm deployment. By default, this is `release: kube-prometheus-stack`.

To get the release name:

```sh
pod="$(kubectl get po -n monitoring | awk '/kube-state-metrics/{ print $1 }')"
kubectl get po -n monitoring "${pod}" -o yaml | grep release
```
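The `awk` filter above simply picks the name of the first pod matching `kube-state-metrics` from the listing. As a standalone illustration with hypothetical pod names, it behaves like this:

```sh
# Hypothetical `kubectl get po -n monitoring` output, piped through the
# same awk filter used above to extract the kube-state-metrics pod name
printf 'NAME                                    READY\nprometheus-kube-state-metrics-7b8c9d-x2   1/1\nsumologic-otelcol-0                       1/1\n' \
  | awk '/kube-state-metrics/{ print $1 }'
# prints: prometheus-kube-state-metrics-7b8c9d-x2
```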

Then set the matching label in the `kube-state-metrics` values:

```yaml
kube-state-metrics:
prometheus:
monitor:
enabled: true
additionalLabels:
release: kube-prometheus-stack
```
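As a quick workaround, the label can also be added to the live ServiceMonitor object instead of redeploying. This is a sketch; the resource name and `monitoring` namespace are assumptions and should be adjusted to your deployment, and a subsequent Helm upgrade of the chart may revert the change:

```sh
# Add the release label to the ServiceMonitor in place so Prometheus
# selects it (resource name and namespace are assumptions)
kubectl label servicemonitor prometheus-kube-state-metrics \
  -n monitoring release=kube-prometheus-stack --overwrite
```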

### Check metrics content

You can print metrics and their metadata to the stdout of the metrics collector and validate that they are correct. It may happen that metrics are ingested, but with different metadata than you expect.