Wrong addresses discovered from k8s for daemonsets in host networking #2871
Comments
Just ran into this again and again with a daemonset. Possibly related?
discordianfish added the component/service discovery and kind/bug labels on Jul 2, 2017
discordianfish changed the title from "Relabling results become stale" to "Wrong addresses discovered from k8s for daemonsets in host networking" on Jul 2, 2017
Getting the node IP when the daemonset pods run in the host network namespace is intended behavior AFAIK. The only case where that wouldn't work is if the nodes couldn't route to each other, or am I missing something?
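As a hedged illustration of that setup (the name, labels, image, and port below are assumptions, not taken from this issue): a DaemonSet with `hostNetwork: true` shares the node's network namespace, so the pod IP Kubernetes reports is the node IP.

```yaml
# Illustrative only: a node-exporter DaemonSet running in the host network
# namespace. Name, labels, and image are assumptions, not from this issue.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true   # pod shares the node's network namespace,
                          # so Kubernetes reports the node IP as the pod IP
      containers:
      - name: node-exporter
        image: prom/node-exporter
        ports:
        - containerPort: 9100
```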
As I understand it (from the linked issue), the underlying problem is that the …
matthiasr referenced this issue on Jul 3, 2017: endpoints for daemonset in host network not ready and inconsistent with pod IPs #48396 (closed)
brian-brazil added the priority/P3 label on Jul 14, 2017
discordianfish commented on Jun 23, 2017 (edited)
Somehow Prometheus scrapes the wrong IPs for my node-exporter pods. They are using host networking and I'm running Prometheus 1.6.3.
Update: Looks like it's an upstream issue after all: __address__ as discovered is already wrong; I assumed it would use the pod IP. Filed kubernetes/kubernetes#48396 but keeping this issue open to track that.
It looks like the SD state in Prometheus itself is correct: a query against Prometheus and a query against Kubernetes both show the same list.
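The exact commands and their output are not reproduced above. As an illustration only (the label selector, job name, and Prometheus address are assumptions), a comparison along these lines contrasts the two views:

```shell
# Pod IPs and the nodes they run on, as Kubernetes sees them
kubectl get pods -o wide -l app=node-exporter

# What Prometheus is actually scraping; the instance label carries the discovered __address__
curl -s 'http://localhost:9090/api/v1/query?query=up{job="kubernetes-service-endpoints"}'
```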
But Prometheus tries to scrape node-exporter-2b14f, for example, on 10.32.130.3 (all the others are inconsistent too), while it's actually running on the __meta_kubernetes_pod_host_ip address that is correctly shown in the target UI's tooltip.
I'm using the prometheus/k8s sample config 1:1 for this job:
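The config block itself is not reproduced above. What follows is not a verbatim copy of the prometheus/k8s example config, only a hedged sketch of an endpoints-style job, with an extra relabel rule (an assumption, not part of the example config) that rewrites __address__ to the node IP taken from __meta_kubernetes_pod_host_ip; the job name and port 9100 are also assumptions.

```yaml
# Sketch only: not the verbatim prometheus/k8s example config.
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  # Keep only services annotated for scraping (as the example config does).
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # Assumed workaround: force the scrape address to the pod's host IP,
  # so host-networked pods are scraped on the node IP. Port is an assumption.
  - source_labels: [__meta_kubernetes_pod_host_ip]
    regex: (.+)
    target_label: __address__
    replacement: '${1}:9100'
```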
All these inconsistencies happened after I (accidentally, thanks Firebase billing UI) shut down my cluster and restarted it. It appears that while the SD updates, the relabeling rules don't seem to apply.