
Node-exporter failed to start #14505

Closed

chrkl opened this issue Jun 7, 2022 · 2 comments

Labels: area/monitoring (Issues or PRs related to the monitoring module, deprecated), kind/bug

chrkl (Member) commented Jun 7, 2022

Description

User tried to deploy Kyma on k3d (Ubuntu 20.04):

$ sudo kyma version
Kyma CLI version: 2.2.0
Kyma cluster version: 2.2.0

Deploying the monitoring component failed. The node-exporter Pods could not start due to the following issue:

$ sudo kubectl describe pod -n kyma-system monitoring-prometheus-node-exporter-55zpf
Name:                 monitoring-prometheus-node-exporter-55zpf
Namespace:            kyma-system
Priority:             2100000
Priority Class Name:  kyma-system-priority
Node:                 k3d-kyma-server-0/172.18.0.3
Start Time:           Mon, 06 Jun 2022 21:22:46 +0200
Labels:               app=prometheus-node-exporter
                      app.kubernetes.io/instance=monitoring
                      app.kubernetes.io/managed-by=Helm
                      app.kubernetes.io/name=prometheus-node-exporter
                      chart=prometheus-node-exporter-2.5.0
                      controller-revision-hash=6f676b7bff
                      helm.sh/chart=prometheus-node-exporter-2.5.0
                      jobLabel=node-exporter
                      pod-template-generation=1
                      release=monitoring
Annotations:          cluster-autoscaler.kubernetes.io/safe-to-evict: true
                      kubernetes.io/limit-ranger: LimitRanger plugin set: memory request for container node-exporter; memory limit for container node-exporter
Status:               Pending
IP:                   172.18.0.3
IPs:
  IP:           172.18.0.3
Controlled By:  DaemonSet/monitoring-prometheus-node-exporter
Containers:
  node-exporter:
    Container ID:  
    Image:         eu.gcr.io/kyma-project/tpi/node-exporter:1.3.1-581a4014
    Image ID:      
    Port:          9100/TCP
    Host Port:     9100/TCP
    Args:
      --path.procfs=/host/proc
      --path.sysfs=/host/sys
      --path.rootfs=/host/root
      --web.listen-address=$(HOST_IP):9100
      --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
      --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  96Mi
    Requests:
      memory:   32Mi
    Liveness:   http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:9100/ delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      HOST_IP:  0.0.0.0
    Mounts:
      /host/proc from proc (ro)
      /host/root from root (ro)
      /host/sys from sys (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  proc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  
  sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:  
  root:
    Type:          HostPath (bare host directory volume)
    Path:          /
    HostPathType:  
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoSchedule op=Exists
                   node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                   node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                   node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                   node.kubernetes.io/not-ready:NoExecute op=Exists
                   node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                   node.kubernetes.io/unreachable:NoExecute op=Exists
                   node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  12h                    default-scheduler  Successfully assigned kyma-system/monitoring-prometheus-node-exporter-55zpf to k3d-kyma-server-0
  Normal   Pulling    12h                    kubelet            Pulling image "eu.gcr.io/kyma-project/tpi/node-exporter:1.3.1-581a4014"
  Normal   Pulled     12h                    kubelet            Successfully pulled image "eu.gcr.io/kyma-project/tpi/node-exporter:1.3.1-581a4014" in 5.023317297s
  Warning  Failed     12h                    kubelet            Error: failed to generate container "df26ae47a70f457398c8b2f461b63aa901e6337feedcf39770f41975e4fc2622" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
  Warning  Failed     12h                    kubelet            Error: failed to generate container "0981b04bff3cdfe036f85903aca4ffc5737d5daa0c56c47939a39a03301f4902" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
  Warning  Failed     12h                    kubelet            Error: failed to generate container "1d14788186b05676fb3ba2b99fb8952d6424e38caa5d2b79946d3f9d0b37a569" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
  Warning  Failed     12h                    kubelet            Error: failed to generate container "0acd7ecee0706c315f9f817e2862c07d23ff0dba89df57458b7c0e1960585232" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
  Warning  Failed     12h                    kubelet            Error: failed to generate container "a617d91be89913fe5176caab86244ec694013fb149128d94f0d5f14a091c8302" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
  Warning  Failed     12h                    kubelet            Error: failed to generate container "91e3e41930d021d97364d64d5d3e737fcd8f0275c26fb305f332b51b760507b5" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
  Warning  Failed     12h                    kubelet            Error: failed to generate container "254c0bfba257fae79108da94a9fc1a93d189e758670da69ebdce759762993620" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
  Warning  Failed     12h                    kubelet            Error: failed to generate container "7281302129f1bf51f890786a940ed6bd008132960c501ba3854fa30328a87938" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
  Warning  Failed     12h                    kubelet            Error: failed to generate container "f33b61042eae9ca1194e6e2f5e9bcec250665cf2b44d689478e9f8d29fa44f72" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
  Warning  Failed     12h (x3 over 12h)      kubelet            (combined from similar events): Error: failed to generate container "1996b70c8ab221405658bdd00318c606da039c3fecbd71ea1564b4043b1851ef" spec: failed to generate spec: path "/" is mounted on "/" but it is not a shared or slave mount
  Normal   Pulled     3m47s (x254 over 12h)  kubelet            Container image "eu.gcr.io/kyma-project/tpi/node-exporter:1.3.1-581a4014" already present on machine
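
For context: the kubelet error comes from the /host/root hostPath mount, which the prometheus-node-exporter chart mounts with mountPropagation: HostToContainer. The container runtime rejects that request when the host's root filesystem is a private mount rather than a shared or slave one. A minimal check on the affected node (assuming findmnt from util-linux is available; a healthy host typically reports shared):

$ findmnt -o TARGET,PROPAGATION /

If it reports private, a commonly cited workaround for this class of error is sudo mount --make-rshared /, though that change does not persist across reboots.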
chrkl added the kind/bug and area/monitoring labels on Jun 7, 2022
chrkl (Member, Author) commented Jun 7, 2022

Maybe related: prometheus-community/helm-charts#467
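
To confirm it is the same root-mount propagation setting, the DaemonSet can be inspected directly (namespace and name taken from the describe output above; grep -C2 just prints the surrounding volumeMount fields):

$ kubectl get ds -n kyma-system monitoring-prometheus-node-exporter -o yaml | grep -C2 mountPropagation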

valvolt commented Jun 8, 2022

I solved this by using the docker.io version of Docker instead of the one that comes with snap:

sudo snap remove --purge docker
sudo apt install docker.io
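
A possible explanation for why this helps (an assumption, not verified in this issue): the snap-packaged Docker daemon runs in a confined mount namespace where / is not a shared mount, so the k3d node containers inherit a private root mount and the node-exporter's mount-propagation request fails. After switching packages, the k3d cluster likely needs to be recreated so its node containers run under the new daemon, roughly (cluster name kyma assumed; kyma provision k3d is the Kyma CLI 2.x command for creating a local cluster):

$ k3d cluster delete kyma
$ kyma provision k3d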

chrkl closed this as completed on Jun 8, 2022