
prometheus-node-exporter pod stuck on pending #43

Closed
archen2019 opened this issue Jul 2, 2020 · 4 comments

Comments

@archen2019 (Contributor)

When installing a second release of the chart while one release is already installed, the prometheus-node-exporter pod gets stuck in Pending and never comes up.

The gg chart was installed first, and the ff chart second.

NAME                                                  READY   STATUS      RESTARTS   AGE
ff-grafana-c7874b854-ntgjf                            2/2     Running     4          9m51s
ff-grafana-db-hhmrv                                   0/1     Completed   4          9m51s
ff-kube-state-metrics-85944bdd8b-697lf                1/1     Running     0          9m51s
ff-prometheus-node-exporter-zsbvr                     0/1     Pending     0          9m51s
ff-prometheus-server-94ffbdb5c-jxszt                  2/2     Running     0          9m51s
ff-timescale-prometheus-5cff84c58f-txv6v              1/1     Running     4          9m51s
ff-timescale-prometheus-drop-chunk-1593707400-vtnb9   0/1     Completed   0          5m27s
ff-timescaledb-0                                      1/1     Running     0          9m50s
gg-grafana-7b8b96bbc8-q6mf8                           2/2     Running     2          17m
gg-grafana-db-wnxr6                                   0/1     Completed   0          17m
gg-kube-state-metrics-5dd59f65fb-v8h78                1/1     Running     0          17m
gg-prometheus-node-exporter-n8b6w                     1/1     Running     0          17m
gg-prometheus-server-b47f7c7dc-4ps97                  2/2     Running     0          17m
gg-timescale-prometheus-5b9455669f-mhtjj              1/1     Running     5          17m
gg-timescale-prometheus-drop-chunk-1593707400-cmcsp   0/1     Completed   0          5m27s
gg-timescaledb-0                                      1/1     Running     0          17m
@archen2019 (Contributor, Author)

Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  95s (x38 over 42m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
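For context on the FailedScheduling message: the conflict comes from the hostPort field in the node-exporter DaemonSet's pod spec, which pins the container port directly on the node. A sketch of the relevant excerpt (field layout follows the standard Kubernetes pod spec; values other than 9100 are illustrative):

```yaml
# Excerpt of a typical prometheus-node-exporter DaemonSet pod spec.
# hostPort binds containerPort 9100 directly on the node's network,
# so the scheduler can place at most one such pod per node.
spec:
  containers:
    - name: node-exporter
      ports:
        - containerPort: 9100
          hostPort: 9100
          protocol: TCP
```

Because the first release's node-exporter already holds hostPort 9100 on the only node, the second release's pod has nowhere to schedule.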

@Harkishen-Singh (Member)

Since the port is occupied, could you try installing node_exporter manually and setting a custom listen address via ./node_exporter --web.listen-address=:<custom_port>?

@VineethReddy02 (Contributor)

The root cause is that tobs install deploys node-exporter by default. Node-exporter runs as a DaemonSet (i.e., one pod on every node in the cluster), and each pod binds to hostPort 9100 on the underlying host. Deploying tobs again, or deploying node-exporter manually a second time, triggers this issue: the new node-exporter pod stays in Pending because the required host port is already consumed by the existing node-exporter.

The use case isn't valid, as a k8s cluster needs only a single node-exporter DaemonSet; it doesn't make sense to run multiple node-exporters on the same node.

@cevian @blagojts we can close this issue. I don't see a reason for multiple tobs installations on the same cluster, which would deploy redundant instances of node-exporter, kube-state-metrics, and promlens; single instances do the required job for the whole cluster.
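If a second tobs release is needed on the same cluster anyway, the duplicate DaemonSet can be avoided by disabling node-exporter in that release's values. A hedged sketch; the exact key name (prometheus-node-exporter.enabled) is assumed from the common Helm subchart convention and may differ in this chart's values layout:

```yaml
# values override for the second release, e.g. passed with
# helm install ff <chart> -f values.yaml
# (key name assumed from the <subchart>.enabled Helm convention)
prometheus-node-exporter:
  enabled: false
```

The first release's node-exporter continues to cover every node, so the second release loses no metrics.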

@VineethReddy02 (Contributor)

Closing the issue, as deploying multiple node-exporters isn't a valid use case.
Feel free to re-open if needed.
