(using cAdvisor inside kubelet 1.11.2)

I see a bunch of metrics reported for system.slice/run-*.scope cgroups, and these cgroups appear to correspond to a behaviour of systemd whenever a mount is created:
# systemctl status run-r6fc4ce2068d9421fbc4334d9c799b19c.scope
● run-r6fc4ce2068d9421fbc4334d9c799b19c.scope - Kubernetes transient mount for /mnt/containers/kubernetes/pods/44fdf928-a871-11e8-817c-12ffd34f0094/volumes/kubernetes.io~secret/default-token-s7sk7
Loaded: loaded
Transient: yes
Drop-In: /run/systemd/system/run-r6fc4ce2068d9421fbc4334d9c799b19c.scope.d
└─50-Description.conf
Active: active (running) since Sat 2018-08-25 14:15:08 UTC; 1 weeks 6 days ago
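To get a sense of how many of these units pile up on one node, something like the following should work (a rough sketch, assuming cgroup v1 with the systemd hierarchy mounted at /sys/fs/cgroup/systemd):
# systemctl list-units --type=scope 'run-*.scope' --no-legend | wc -l
# ls -d /sys/fs/cgroup/systemd/system.slice/run-*.scope | wc -l
The first command counts the transient scope units systemd is tracking; the second counts the corresponding cgroup directories that cAdvisor ends up walking.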
Unless I misunderstand, there is never going to be any load average, or run-queue, or memory failures, etc., for these cgroups.
I see 45 different metrics reported per unit, so many thousands across our cluster.
From the systemd.scope(5) documentation:

Scope units are [...] created programmatically using the bus interfaces of systemd. [...]
The main purpose of scope units is grouping worker processes of a system service for organization and for managing resources.
I might speculate that these units are created internally by systemd to run processing relating to each mount, which would negate my earlier claim that there is "never going to be any load ..." for them.
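One quick way to check that speculation is to see whether the scope still contains any processes; if it does not, there is nothing for the per-cgroup CPU/memory stats to describe. A sketch, again assuming cgroup v1 with the systemd hierarchy in the usual place, and reusing the unit name from the systemctl output above:
# cat /sys/fs/cgroup/systemd/system.slice/run-r6fc4ce2068d9421fbc4334d9c799b19c.scope/cgroup.procs
Empty output means no tasks are attached to the cgroup, so its load/run-queue/memory-failure counters will never show anything interesting.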
Maybe a different approach would be to limit the depth to which certain cgroups are monitored? E.g. I could declare to cAdvisor I'm happy to monitor /system.slice as a whole, and don't need to see detail under it?
Ignoring these would be really useful for us. We have thousands of k8s pods with a few mounts each, and these system.slice/run-*.scope metrics account for the vast majority of the metrics reported by cadvisor.
Having a way to ignore them at the source would be awesome.
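For a rough measure of how much of the scrape these series account for, something like the following can be run on a node, assuming cAdvisor's Prometheus endpoint is still exposed there (port 4194 was the kubelet default around 1.11; it may differ or be disabled in your setup):
# curl -s http://localhost:4194/metrics | grep -c 'run-r.*\.scope'
# curl -s http://localhost:4194/metrics | grep -cv '^#'
The first count is the number of samples naming one of these transient mount scopes, the second is the total number of samples exposed.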