[v0.19.1] K8S SD failing if the cluster is in OpenStack #1680

Krylon360 commented May 27, 2016

OpenStack utilizes two IP addresses: a private (management) IP and a public (floating) IP. With 0.19.1, Prometheus is only detecting the private IP, so it fails its probe when hitting the node port and the cAdvisor port. If I hit that same path using the public (floating) IP tied to that endpoint, the /metrics page pulls up just fine.

So: is this a Prometheus issue or a Kubernetes issue? I'm going through both repos to see where the problem might be coming from. This worked fine using the 0.19.0 container and started failing with 0.19.1.

Comments
Can you paste your configuration? In general, nothing should have changed for service-endpoint discovery in k8s since 0.18.0, and even less so since 0.19.0 – see the diff between 0.19.0 and 0.19.1 here.
In general, I would expect Prometheus to be on the management plane and not use public IPs. Are you running Prometheus inside the cluster or outside? What is the use case for accessing public IPs?

I'm also curious about the "no route to host" message – since Prometheus doesn't control the routing, I don't see how this can have changed.

Finally, in 0.19.0 we added pod discovery and the SD puts out everything. Are you possibly not filtering on `__meta_kubernetes_role` in the relabelling configuration?
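A minimal sketch of the kind of role filter described above, assuming a job that should only keep endpoint targets; the job name and the exact role value are illustrative, not taken from the reporter's configuration:

```yaml
scrape_configs:
  - job_name: 'kubernetes-endpoints'   # illustrative job name
    # kubernetes_sd_configs: ... as in your existing setup ...
    relabel_configs:
      # Keep only endpoint targets and drop the pod (and other) targets
      # that the SD also emits since 0.19.0.
      - source_labels: [__meta_kubernetes_role]
        regex: endpoint                # assumed role value; check the SD's meta labels
        action: keep
```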
With OpenStack Networking, the IP isn't public per se. I'm pretty sure our config could use an update... I'll snip just the kubernetes section. We are running Prometheus, Blackbox Exporter, Node Exporter, StatsD Exporter, Alertmanager, and Grafana outside of the Kubernetes cluster on a dedicated server, hence the BearerToken setting being flagged. If the GitHub Markdown mangles the formatting on the config below, here is a gist of the same config.
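For context, a hedged sketch of what such an out-of-cluster setup might look like in a 0.19-era configuration; the API server URL, file paths, and job name are placeholders (the reporter's real config is in the linked gist), and field names should be checked against the documentation for your exact Prometheus version:

```yaml
scrape_configs:
  - job_name: 'kubernetes-cluster'                       # placeholder job name
    kubernetes_sd_configs:
      - api_servers:
          - 'https://203.0.113.10:6443'                  # placeholder API server address (floating IP)
        bearer_token_file: '/etc/prometheus/k8s.token'   # placeholder path to a service account token
        tls_config:
          ca_file: '/etc/prometheus/k8s-ca.crt'          # placeholder CA certificate path
```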
Here is where I think the problem is. This currently uses the internal node IP if available, which shows that it expects Prometheus to be run in-cluster. Not sure of the best fix for this: make it configurable in the SD, or emit a target for each node IP and filter by relabelling?
The additional IPs should be available as labels for relabelling.
Ah OK, so a single target with labels for each node IP - great idea.
jimmidyson referenced this issue on Jun 7, 2016 (merged): Kubernetes SD: Add labels for all node addresses and discover node port #1712
#1712 makes all node IPs reported by the Kubernetes API server available for relabelling via meta labels.
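A hedged sketch of how the floating IP could then be selected by relabelling; the meta label name follows the `__meta_kubernetes_node_address_<AddressType>` pattern later documented for the Kubernetes SD, but the exact name and the port below are assumptions, not confirmed in this thread:

```yaml
relabel_configs:
  # Rewrite the scrape address to the node's external (floating) IP instead
  # of the internal address the SD picks by default.
  - source_labels: [__meta_kubernetes_node_address_ExternalIP]   # assumed label name
    regex: (.+)
    target_label: __address__
    replacement: '${1}:10255'        # placeholder node/kubelet port
    action: replace
```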
brian-brazil closed this in #1712 on Jun 7, 2016
Just an update: after review, the meta label will be
Awesome! Thanks everyone for the quick turnaround!
Bryce Walter