
[stable/prometheus] node_exporters are unreachable from the prometheus server #9791

Closed
4ydx opened this issue Dec 7, 2018 · 3 comments

Comments

@4ydx commented Dec 7, 2018

This happens in DigitalOcean's managed Kubernetes environment. I believe they use flannel.

helm install --name prometheus-service stable/prometheus

This results in each node_exporter pod having an IP address that matches the INTERNAL-IP of the node it runs on, as reported by kubectl get nodes -o wide.

kubectl get nodes -o wide

NAME     STATUS   ROLES    AGE    VERSION   INTERNAL-IP      
node-A   Ready    <none>   7d2h   v1.12.1   10.136.140.203   
node-B   Ready    <none>   7d2h   v1.12.1   10.136.143.172   
node-C   Ready    <none>   7d2h   v1.12.1   10.136.113.228  

Note that node-C is 10.136.113.228.

kubectl get pods -o wide | grep exporter

prometheus-service-node-exporter-x 1/1     Running   0      5m2s    10.136.140.203   node-A
prometheus-service-node-exporter-y 1/1     Running   0      5m2s    10.136.143.172   node-B
prometheus-service-node-exporter-z 1/1     Running   0      5m2s    10.136.113.228   node-C

prometheus-service-node-exporter-z is running on node-C.

prometheus-service-server-654cff9c44-a   10.244.43.5      node-C

The Prometheus server is also running on node-C.

Now the issue is that the Prometheus server can only communicate with the node_exporter on node-C. Is this expected behavior? Is there something wrong with how DigitalOcean handles networking between nodes, or should the node_exporters be assigned IP addresses that are reachable across the cluster rather than the node's INTERNAL-IP?
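A quick way to check what is going on (the resource and container names below are assumed from the release name shown above; adjust for your cluster). The pod IPs match the nodes' INTERNAL-IPs because, as far as I can tell, the chart's node-exporter DaemonSet defaults to hostNetwork: true:

# Confirm the node-exporter DaemonSet is on the host network
kubectl get daemonset prometheus-service-node-exporter -o jsonpath='{.spec.template.spec.hostNetwork}'

# Probe a node_exporter on another node from inside the Prometheus server pod
# (the prometheus image ships busybox wget; the container name here is an assumption)
kubectl exec prometheus-service-server-654cff9c44-a -c prometheus-server -- \
  wget -qO- -T 5 http://10.136.140.203:9100/metrics | head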

@4ydx (Author) commented Dec 7, 2018

After looking into it further, this is, for better or worse, simply how DigitalOcean networking works within clusters at the moment. For now I am just running the Helm chart with the node_exporters disabled:

helm install --name prometheus-service stable/prometheus --tls --set nodeExporter.enabled=false

@Genki-S commented Apr 27, 2019

I found a workaround for this: give your node-exporters a different IP than the host they run on. I found that in the DigitalOcean Kubernetes environment:

  • Pods cannot reach node IPs (except the IP of the node the pod is running on)
  • Pods can reach pod IPs (even when the target pods are running on different nodes)

I believe this can be done by setting hostNetwork: false.

(Note the caveat that your "node" metrics will then have the IP of a pod, which can change when the pod restarts.)
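A minimal sketch of that override, assuming the chart exposes nodeExporter.hostNetwork and nodeExporter.hostPID values (check the values.yaml of your chart version):

# node-exporter-values.yaml (hypothetical file name)
nodeExporter:
  hostNetwork: false
  hostPID: false

helm install --name prometheus-service stable/prometheus -f node-exporter-values.yaml

With hostNetwork: false the exporter pods get cluster pod IPs, which the Prometheus server can reach across nodes, at the cost noted above that the "node" targets follow the pod IPs.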

@RdL87 commented Jan 29, 2020

I had the same problem and solved it by opening TCP ports 9100 and 9101 in the AWS security groups.
I hope it can be helpful for you too.
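For the AWS case, a sketch of that security group change (the group ID is a placeholder; the rule lets the worker nodes reach each other on the exporter ports):

# sg-0123456789abcdef0 is a placeholder for the security group attached to your worker nodes
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 9100-9101 \
  --source-group sg-0123456789abcdef0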
