Prometheus CrashLoopBackOff #2

Open
ztnel opened this issue May 7, 2023 · 1 comment
Labels
defect Something isn't working

Comments


ztnel commented May 7, 2023

Description

After a 9-day test run of the monitoring cluster deployments, the prometheus-k8s-0 pod went into CrashLoopBackOff.

Details

Prometheus crashed abruptly trying to write to invalid memory. The panic is a nil pointer dereference raised in the TSDB head chunk writer while replaying the WAL (loadWAL), which is why it recurs on every restart:

github.com/prometheus/prometheus/tsdb.(*memSeries).cutNewHeadChunk(0x8fb2840, 0xe1104a28, 0x187, 0x51de1b0, 0x1)
  /app/tsdb/head.go:1962 +0x24
github.com/prometheus/prometheus/tsdb.(*memSeries).append(0x8fb2840, 0xe1104a28, 0x187, 0x0, 0x0, 0x0, 0x0, 0x51de1b0, 0x1)
  /app/tsdb/head.go:2118 +0x3a4
github.com/prometheus/prometheus/tsdb.(*Head).processWALSamples(0x5d00000, 0xdfc45a00, 0x187, 0xdcca880, 0xdcca840, 0x0, 0x0)
  /app/tsdb/head.go:365 +0x284
github.com/prometheus/prometheus/tsdb.(*Head).loadWAL.func5(0x5d00000, 0x14a86038, 0x14a86040, 0xdcca880, 0xdcca840)
  /app/tsdb/head.go:459 +0x3c
created by github.com/prometheus/prometheus/tsdb.(*Head).loadWAL
  /app/tsdb/head.go:458 +0x268
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xc pc=0x1532d88]

goroutine 297 [running]:
bufio.(*Writer).Available(...)
  /usr/local/go/src/bufio/bufio.go:608
github.com/prometheus/prometheus/tsdb/chunks.(*ChunkDiskMapper).WriteChunk(0x51de1b0, 0x71c0, 0x0, 0xe0a02338, 0x187, 0xe10d8b08, 0x187, 0x240cee0, 0x5e5ce20, 0x0, ...)
  /app/tsdb/chunks/head_chunks.go:252 +0x500
github.com/prometheus/prometheus/tsdb.(*memSeries).mmapCurrentHeadChunk(0x8fb1970, 0x51de1b0)
  /app/tsdb/head.go:1988 +0x6c
github.com/prometheus/prometheus/tsdb.(*memSeries).cutNewHeadChunk(0x8fb1970, 0xe1104a28, 0x187, 0x51de1b0, 0x1)
  /app/tsdb/head.go:1962 +0x24
github.com/prometheus/prometheus/tsdb.(*memSeries).append(0x8fb1970, 0xe1104a28, 0x187, 0x8cab4ba2, 0x3fded782, 0x0, 0x0, 0x51de1b0, 0x1)
  /app/tsdb/head.go:2118 +0x3a4
github.com/prometheus/prometheus/tsdb.(*Head).processWALSamples(0x5d00000, 0xdfc45a00, 0x187, 0xdcca780, 0xdcca740, 0x0, 0x0)
  /app/tsdb/head.go:365 +0x284
github.com/prometheus/prometheus/tsdb.(*Head).loadWAL.func5(0x5d00000, 0x14a86038, 0x14a86040, 0xdcca780, 0xdcca740)
  /app/tsdb/head.go:459 +0x3c
created by github.com/prometheus/prometheus/tsdb.(*Head).loadWAL
  /app/tsdb/head.go:458 +0x268
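
For reference, the same trace can be pulled from the last crashed container instance with something like the following (pod, namespace, and container names are taken from the describe output below):

kubectl logs prometheus-k8s-0 -n monitoring -c prometheus --previous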

Full output from kubectl describe pod:

christiansargusingh  20:21:33 k3s/monitoring/manifests > kubectl describe pod prometheus-k8s-0 -n monitoring       
Name:         prometheus-k8s-0
Namespace:    monitoring
Priority:     0
Node:         master/192.168.2.111
Start Time:   Thu, 27 Apr 2023 03:17:09 -0400
Labels:       app=prometheus
              controller-revision-hash=prometheus-k8s-749f5b9588
              prometheus=k8s
              statefulset.kubernetes.io/pod-name=prometheus-k8s-0
Annotations:  <none>
Status:       Running
IP:           10.42.0.14
IPs:
  IP:           10.42.0.14
Controlled By:  StatefulSet/prometheus-k8s
Containers:
  prometheus:
    Container ID:  containerd://57c7b5fc9b085d75062e5638e0d54c94ae4f41928ffef688ff4038f1d2d39969
    Image:         prom/prometheus:v2.19.1
    Image ID:      docker.io/prom/prometheus@sha256:efe62fa8804e9fd2612a945b70c630cc27e21b5fb8233ccc8be4cfbe06d26b04
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --web.console.templates=/etc/prometheus/consoles
      --web.console.libraries=/etc/prometheus/console_libraries
      --config.file=/etc/prometheus/config_out/prometheus.env.yaml
      --storage.tsdb.path=/prometheus
      --storage.tsdb.retention.time=15d
      --web.enable-lifecycle
      --storage.tsdb.no-lockfile
      --web.external-url=http://prometheus.192.168.2.104.nip.io
      --web.route-prefix=/
    State:       Waiting
      Reason:    CrashLoopBackOff
    Last State:  Terminated
      Reason:    Error
      Message:   rrentHeadChunk(0x8fb2840, 0x51de1b0)
                 /app/tsdb/head.go:1991 +0x22c
github.com/prometheus/prometheus/tsdb.(*memSeries).cutNewHeadChunk(0x8fb2840, 0xe1104a28, 0x187, 0x51de1b0, 0x1)
  /app/tsdb/head.go:1962 +0x24
github.com/prometheus/prometheus/tsdb.(*memSeries).append(0x8fb2840, 0xe1104a28, 0x187, 0x0, 0x0, 0x0, 0x0, 0x51de1b0, 0x1)
  /app/tsdb/head.go:2118 +0x3a4
github.com/prometheus/prometheus/tsdb.(*Head).processWALSamples(0x5d00000, 0xdfc45a00, 0x187, 0xdcca880, 0xdcca840, 0x0, 0x0)
  /app/tsdb/head.go:365 +0x284
github.com/prometheus/prometheus/tsdb.(*Head).loadWAL.func5(0x5d00000, 0x14a86038, 0x14a86040, 0xdcca880, 0xdcca840)
  /app/tsdb/head.go:459 +0x3c
created by github.com/prometheus/prometheus/tsdb.(*Head).loadWAL
  /app/tsdb/head.go:458 +0x268
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0xc pc=0x1532d88]

goroutine 297 [running]:
bufio.(*Writer).Available(...)
  /usr/local/go/src/bufio/bufio.go:608
github.com/prometheus/prometheus/tsdb/chunks.(*ChunkDiskMapper).WriteChunk(0x51de1b0, 0x71c0, 0x0, 0xe0a02338, 0x187, 0xe10d8b08, 0x187, 0x240cee0, 0x5e5ce20, 0x0, ...)
  /app/tsdb/chunks/head_chunks.go:252 +0x500
github.com/prometheus/prometheus/tsdb.(*memSeries).mmapCurrentHeadChunk(0x8fb1970, 0x51de1b0)
  /app/tsdb/head.go:1988 +0x6c
github.com/prometheus/prometheus/tsdb.(*memSeries).cutNewHeadChunk(0x8fb1970, 0xe1104a28, 0x187, 0x51de1b0, 0x1)
  /app/tsdb/head.go:1962 +0x24
github.com/prometheus/prometheus/tsdb.(*memSeries).append(0x8fb1970, 0xe1104a28, 0x187, 0x8cab4ba2, 0x3fded782, 0x0, 0x0, 0x51de1b0, 0x1)
  /app/tsdb/head.go:2118 +0x3a4
github.com/prometheus/prometheus/tsdb.(*Head).processWALSamples(0x5d00000, 0xdfc45a00, 0x187, 0xdcca780, 0xdcca740, 0x0, 0x0)
  /app/tsdb/head.go:365 +0x284
github.com/prometheus/prometheus/tsdb.(*Head).loadWAL.func5(0x5d00000, 0x14a86038, 0x14a86040, 0xdcca780, 0xdcca740)
  /app/tsdb/head.go:459 +0x3c
created by github.com/prometheus/prometheus/tsdb.(*Head).loadWAL
  /app/tsdb/head.go:458 +0x268

      Exit Code:    2
      Started:      Sat, 06 May 2023 20:18:05 -0400
      Finished:     Sat, 06 May 2023 20:18:37 -0400
    Ready:          False
    Restart Count:  958
    Requests:
      memory:     400Mi
    Liveness:     http-get http://:web/-/healthy delay=0s timeout=3s period=5s #success=1 #failure=6
    Readiness:    http-get http://:web/-/ready delay=0s timeout=3s period=5s #success=1 #failure=120
    Environment:  <none>
    Mounts:
      /etc/prometheus/certs from tls-assets (ro)
      /etc/prometheus/config_out from config-out (ro)
      /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
      /prometheus from prometheus-k8s-db (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zhmwj (ro)
  prometheus-config-reloader:
    Container ID:  containerd://7e43f5fa9323d87cf0d3597bbda806087f609c869d9e27e25b1ea68531852916
    Image:         carlosedp/prometheus-config-reloader:v0.40.0
    Image ID:      docker.io/carlosedp/prometheus-config-reloader@sha256:218f9f49a51a072af66ac67696c092a4962fd5108cd5525dbbcea5c239fe3862
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/prometheus-config-reloader
    Args:
      --log-format=logfmt
      --reload-url=http://localhost:9090/-/reload
      --config-file=/etc/prometheus/config/prometheus.yaml.gz
      --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
    State:          Running
      Started:      Thu, 27 Apr 2023 03:18:00 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  25Mi
    Requests:
      cpu:     100m
      memory:  25Mi
    Environment:
      POD_NAME:  prometheus-k8s-0 (v1:metadata.name)
    Mounts:
      /etc/prometheus/config from config (rw)
      /etc/prometheus/config_out from config-out (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zhmwj (ro)
  rules-configmap-reloader:
    Container ID:  containerd://0f378a4b49b78d9aad618dba27139e4132d379ebe5aa42e6ca788b5aa8d96706
    Image:         carlosedp/configmap-reload:latest
    Image ID:      docker.io/carlosedp/configmap-reload@sha256:cd9f05743ab6024e445ea6e0da4416122eae5e1d0149dd33232be0601096c8d4
    Port:          <none>
    Host Port:     <none>
    Args:
      --webhook-url=http://localhost:9090/-/reload
      --volume-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
    State:          Running
      Started:      Thu, 27 Apr 2023 03:18:01 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  25Mi
    Requests:
      cpu:        100m
      memory:     25Mi
    Environment:  <none>
    Mounts:
      /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zhmwj (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s
    Optional:    false
  tls-assets:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  prometheus-k8s-tls-assets
    Optional:    false
  config-out:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  prometheus-k8s-rulefiles-0:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      prometheus-k8s-rulefiles-0
    Optional:  false
  prometheus-k8s-db:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-zhmwj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                      From     Message
  ----     ------     ----                     ----     -------
  Warning  Unhealthy  32m (x7497 over 9d)      kubelet  Readiness probe failed: HTTP probe failed with statuscode: 503
  Warning  BackOff    118s (x22276 over 4d8h)  kubelet  Back-off restarting failed container prometheus in pod prometheus-k8s-0_monitoring(2994d714-16dc-4b85-92b4-2232b1a9d8c6)

It's possible that, with the retention policy set to 15 days, the TSDB grew enough to overload some internal memory buffer (a retention-cap sketch follows the df output below). When I checked the disk space on the nodes they looked pretty clean:

christiansargusingh  20:23:45 dronectl/windfarm/ansible > ansible -a "df -h" all  
192.168.2.111 | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        28G   13G   14G  49% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           4.0G     0  4.0G   0% /dev/shm
tmpfs           4.0G  128M  3.8G   4% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           4.0G     0  4.0G   0% /sys/fs/cgroup
/dev/mmcblk0p1   64M   52M   13M  81% /boot
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/e3800ba16fec9f15f0dc7ebfd5af93425f76ed9d6f1c8378131591c30fecff0c/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/75a1a44116df01ceb6c3a80b13ac1a8f49aa7b4533c671d64f74199a5d48f04c/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/37876f6ff40d563b060a36f2a70bb631dd140e6a31065cd0a00978cd0555334c/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/07f08d3cf36e585782bce15927cf98dd9807589d90c6f94f78217df34b5d44e4/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/d070f5b230fb1b3d2595ee81023884901663571e918fec2980a55c88418d01ea/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/63e007f07d78406b91a66b0d1d64bbe10067168ce355865905bb1f9c9a06e885/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/aee6f7109335a546e43177837e748b09244108339ac0757358e5ef73a75a84d5/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/8a74411e7b7931450914ed3d6685187f9edde2ecb46499cbf64d1579d054e275/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/7d9defdd140275bebba7ca147c745fc8af159d86624097b36251c288c3c73234/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/85c704598591e26b526e93045be1175cfb1e2fb722d0a0f57bdc79a049021dae/shm
tmpfs           802M     0  802M   0% /run/user/1000
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/e4708f79505a42cc3938f36ca58faa12d1652785110b548f732e6b007a4baae7/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/a3ff96c1c22e4c4de5af8e51949f26fa500661b9082fa51f2f6e7e6a86415b44/shm
192.168.2.101 | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        28G  1.5G   25G   6% /
devtmpfs        484M     0  484M   0% /dev
tmpfs           488M     0  488M   0% /dev/shm
tmpfs           488M   14M  475M   3% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           488M     0  488M   0% /sys/fs/cgroup
/dev/mmcblk0p1   64M   52M   13M  81% /boot
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/6bc4c460fddffd4c024278d57d0f4713dbff8e09bc3d111ba73d24b35dec7cd9/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/52323c0d28a72965e4fa6a5473d1534ab82973683b2a94fb9d346d8da69317fa/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/0fbb0dc06021abe60c36296db9f7930c51e6d0625372c031c51b944b0f984466/shm
tmpfs            98M     0   98M   0% /run/user/1000
192.168.2.100 | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        28G  1.6G   25G   7% /
devtmpfs        484M     0  484M   0% /dev
tmpfs           488M     0  488M   0% /dev/shm
tmpfs           488M   14M  475M   3% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           488M     0  488M   0% /sys/fs/cgroup
/dev/mmcblk0p1   64M   52M   13M  81% /boot
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/ffb3428e1de1109b76fa40ab04179b84f94aa12ff619d6756abd5f4151248d0d/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/99f0948bf77ee38a2ef610fbd832d1861d3172a746456a7dd73cd0b7a1c25fac/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/988727990842e1c5558ba7d9dd3ab0dc4b216bbef898f7cf909168875f4784ec/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/d95a3563f43bfcd3b0773d8234ac437380b0ab6eaba9b379f51ecd5326380ce1/shm
tmpfs            98M     0   98M   0% /run/user/1000
192.168.2.102 | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        28G  1.5G   25G   6% /
devtmpfs        484M     0  484M   0% /dev
tmpfs           488M     0  488M   0% /dev/shm
tmpfs           488M   20M  469M   4% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           488M     0  488M   0% /sys/fs/cgroup
/dev/mmcblk0p1   64M   52M   13M  81% /boot
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/349f93c463d8b88497b3249fd24875150bb00c1084881ce9d12d627616eaced5/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/cbf63e45b2aa0d78c558c645f672c416e1fad20b9ce9354de48f5408067e54ee/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/0ce8df185f1d65939d3c34a071dbc85fff93177b1660f19a622dabbccbf332ee/shm
tmpfs            98M     0   98M   0% /run/user/1000
192.168.2.103 | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        28G  1.5G   25G   6% /
devtmpfs        484M     0  484M   0% /dev
tmpfs           488M     0  488M   0% /dev/shm
tmpfs           488M   14M  475M   3% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           488M     0  488M   0% /sys/fs/cgroup
/dev/mmcblk0p1   64M   52M   13M  81% /boot
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/848d6527d03b5370231f95fece7dfbcd2adaa4d6cff981eb82173f4886ad4983/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/771e9f1cbe92b269bf1415a82ce297710212c0864751bf6a0d47367c51702146/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/97d57a31f88e1d8e654123500c5d0c5ea4af2b2fb00a66ae7d36dd186f91240c/shm
tmpfs            98M     0   98M   0% /run/user/1000
192.168.2.104 | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        28G  1.5G   25G   6% /
devtmpfs        484M     0  484M   0% /dev
tmpfs           488M     0  488M   0% /dev/shm
tmpfs           488M   14M  475M   3% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           488M     0  488M   0% /sys/fs/cgroup
/dev/mmcblk0p1   64M   52M   13M  81% /boot
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/5030e50d36a13cdab0c6e90c44db353d01ff8b95e42423cb0383bf33d6711286/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/2406466746c736fb1f34eb3545c6485dd42f98e8768bcd9287245054bef28ff2/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/759f4327f0402521a89977a914ee20ce8aa21b125f673ad83e7a9c1010d7c42e/shm
tmpfs            98M     0   98M   0% /run/user/1000
192.168.2.105 | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        28G  1.5G   25G   6% /
devtmpfs        484M     0  484M   0% /dev
tmpfs           488M     0  488M   0% /dev/shm
tmpfs           488M   14M  475M   3% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           488M     0  488M   0% /sys/fs/cgroup
/dev/mmcblk0p1   64M   52M   13M  81% /boot
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/bf82df28c64e0bb03b016847ace5b20e3b74a19900fbab397cd03236ff9960b9/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/0c54d98c6f6c59abf81dc1c56c447812cede9ccbd0a981475cd1771a60e7c135/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/f2745a79493e80090e0f1ebbdc3f0654515358dfa72588ad1e41c3ae7c3e2325/shm
tmpfs            98M     0   98M   0% /run/user/1000
192.168.2.106 | CHANGED | rc=0 >>
Filesystem      Size  Used Avail Use% Mounted on
/dev/root        28G  1.5G   25G   6% /
devtmpfs        484M     0  484M   0% /dev
tmpfs           488M     0  488M   0% /dev/shm
tmpfs           488M   14M  475M   3% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           488M     0  488M   0% /sys/fs/cgroup
/dev/mmcblk0p1   64M   52M   13M  81% /boot
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/5698f39717231eb6e467c45f2cbdd2b45cf97d6af47f059ae297d9c69e24ae4b/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/e5f96f6f93ec55075185fd3f2f114085b2a671b4a8a2e10e7ba689f9a2745c9b/shm
shm              64M     0   64M   0% /run/k3s/containerd/io.containerd.grpc.v1.cri/sandboxes/4e5a0d10982a983d6d245f1a1b10afc594844af3bf8eb8b79e81b2e2f9eea1ad/shm
tmpfs            98M     0   98M   0% /run/user/1000
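
If the 15-day time-based retention really is what lets the TSDB outgrow the pod, one mitigation sketch would be to cap retention by size as well as by time. In flag form that would look roughly like the lines below; --storage.tsdb.retention.size is an assumption here (it is not in the current Args list), and with prometheus-operator these values are normally set through the Prometheus custom resource rather than by editing the container args directly:

  --storage.tsdb.retention.time=7d
  --storage.tsdb.retention.size=5GB   # assumed cap, tune to the node's capacity

With both limits set, Prometheus drops old blocks as soon as either one is hit, whichever comes first.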

ztnel commented May 7, 2023

Deleting the pod and allowing Kubernetes to reschedule seemed to do the trick for now.
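
Roughly, for anyone hitting the same loop (pod and namespace names as in the describe output above):

kubectl delete pod prometheus-k8s-0 -n monitoring   # the StatefulSet recreates the pod
kubectl get pods -n monitoring -w                   # watch until it reports Ready

Since prometheus-k8s-db is an EmptyDir volume, deleting the pod also discards the head chunks and WAL that the replay was crashing on, which is likely why the rescheduled pod came up clean, at the cost of losing the unflushed samples.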
