
Show current runtime configuration during startup #7704

Open
kwilczynski opened this issue Jan 22, 2024 · 14 comments · May be fixed by #7783
Assignees
Labels
good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.

Comments

@kwilczynski
Member

Currently, CRI-O logs only a handful of details about its runtime configuration during startup, for example:

(running with the default INFO log level)

INFO[2024-01-22 09:22:19.340284678Z] Starting CRI-O, version: 1.29.1, git: 78e179ba8dd3ce462382a17049e8d1f770246af1(clean)
INFO[2024-01-22 09:22:19.343411355Z] Node configuration value for hugetlb cgroup is true
INFO[2024-01-22 09:22:19.343426860Z] Node configuration value for pid cgroup is true
INFO[2024-01-22 09:22:19.343462665Z] Node configuration value for memoryswap cgroup is true
INFO[2024-01-22 09:22:19.343475351Z] Node configuration value for cgroup v2 is true
INFO[2024-01-22 09:22:19.346534244Z] Node configuration value for systemd AllowedCPUs is true
INFO[2024-01-22 09:22:19.346887437Z] [graphdriver] using prior storage driver: overlay
INFO[2024-01-22 09:22:19.347576764Z] Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL
WARN[2024-01-22 09:22:19.347641234Z] 'runc is being ignored due to: "\"runc\" not found in $PATH: exec: \"runc\": executable file not found in $PATH"
INFO[2024-01-22 09:22:19.355170897Z] Checkpoint/restore support disabled
INFO[2024-01-22 09:22:19.355199829Z] Using seccomp default profile when unspecified: true
INFO[2024-01-22 09:22:19.355208657Z] Using the internal default seccomp profile
INFO[2024-01-22 09:22:19.355215139Z] Installing default AppArmor profile: crio-default
INFO[2024-01-22 09:22:19.375117392Z] No blockio config file specified, blockio not configured
INFO[2024-01-22 09:22:19.375171923Z] RDT not available in the host system
INFO[2024-01-22 09:22:19.375948272Z] Conmon does support the --sync option
INFO[2024-01-22 09:22:19.375987833Z] Conmon does support the --log-global-size-max option
INFO[2024-01-22 09:22:19.393795538Z] Found CNI network k8s-pod-network (type=calico) at /etc/cni/net.d/10-calico.conflist
INFO[2024-01-22 09:22:19.394815197Z] Found CNI network crio (type=bridge) at /etc/cni/net.d/11-crio-ipv4-bridge.conflist
INFO[2024-01-22 09:22:19.394830260Z] Updated default CNI network name to k8s-pod-network
INFO[2024-01-22 09:22:19.395084495Z] Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus
INFO[2024-01-22 09:22:19.395127327Z] Restore irqbalance config: failed to get current CPU ban list, ignoring
INFO[2024-01-22 09:22:19.423534568Z] Got pod network &{Name:dashboard-metrics-scraper-5657497c4c-jdnrv Namespace:kubernetes-dashboard ID:ebc5b52f310931b2d5e9aaa2e39a122b924a803f96d95e72420508353cd48e82 UID:2f05d7ef-2bc7-4f9a-ad13-f2c7509732b0 NetNS:/var/run/netns/8d9001e0-7f71-46c9-af07-6056d48c742a Networks:[] RuntimeConfig:map[k8s-pod-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:kubepods-besteffort-pod2f05d7ef_2bc7_4f9a_ad13_f2c7509732b0.slice PodAnnotations:0xc000386ce0}] Aliases:map[]}
INFO[2024-01-22 09:22:19.423995001Z] Checking pod kubernetes-dashboard_dashboard-metrics-scraper-5657497c4c-jdnrv for CNI network k8s-pod-network (type=calico)
INFO[2024-01-22 09:22:19.424372538Z] Got pod network &{Name:kubernetes-dashboard-78f87ddfc-k9ll4 Namespace:kubernetes-dashboard ID:5b176d7776e68856adc88c494409479d4b0d4d759787489ecc91f355b0d2f427 UID:ea501514-fe2d-4327-b37a-64c29b64157e NetNS:/var/run/netns/c8af1ec5-5c8a-459c-b5e6-c0f3d7e452d1 Networks:[] RuntimeConfig:map[k8s-pod-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:kubepods-besteffort-podea501514_fe2d_4327_b37a_64c29b64157e.slice PodAnnotations:0xc000386ea8}] Aliases:map[]}
INFO[2024-01-22 09:22:19.424630333Z] Checking pod kubernetes-dashboard_kubernetes-dashboard-78f87ddfc-k9ll4 for CNI network k8s-pod-network (type=calico)
INFO[2024-01-22 09:22:19.424956703Z] Registered SIGHUP reload watcher
INFO[2024-01-22 09:22:19.424972744Z] Starting seccomp notifier watcher
INFO[2024-01-22 09:22:19.424999494Z] Create NRI interface
INFO[2024-01-22 09:22:19.425006943Z] NRI interface is disabled in the configuration.

Not much more is added when CRI-O runs with an increased log level:

(running with the DEBUG log level; Request and Response log lines have been omitted for brevity)

INFO[2024-01-22 09:22:53.860120634Z] Starting CRI-O, version: 1.29.1, git: 78e179ba8dd3ce462382a17049e8d1f770246af1(clean)
INFO[2024-01-22 09:22:53.864914462Z] Node configuration value for hugetlb cgroup is true
INFO[2024-01-22 09:22:53.864931140Z] Node configuration value for pid cgroup is true
INFO[2024-01-22 09:22:53.864961708Z] Node configuration value for memoryswap cgroup is true
INFO[2024-01-22 09:22:53.864972573Z] Node configuration value for cgroup v2 is true
INFO[2024-01-22 09:22:53.869490259Z] Node configuration value for systemd AllowedCPUs is true
DEBU[2024-01-22 09:22:53.869578815Z] Cached value indicated that overlay is supported  file="overlay/overlay.go:247"
DEBU[2024-01-22 09:22:53.869647246Z] Cached value indicated that overlay is supported  file="overlay/overlay.go:247"
DEBU[2024-01-22 09:22:53.869770073Z] Cached value indicated that metacopy is not being used  file="overlay/overlay.go:408"
DEBU[2024-01-22 09:22:53.869882426Z] Cached value indicated that native-diff is usable  file="overlay/overlay.go:798"
DEBU[2024-01-22 09:22:53.869932214Z] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false  file="overlay/overlay.go:467"
INFO[2024-01-22 09:22:53.870010635Z] [graphdriver] using prior storage driver: overlay  file="drivers/driver.go:406"
INFO[2024-01-22 09:22:53.870579830Z] Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL  file="capabilities/capabilities_linux.go:38"
WARN[2024-01-22 09:22:53.870660299Z] 'runc is being ignored due to: "\"runc\" not found in $PATH: exec: \"runc\": executable file not found in $PATH"  file="config/config.go:1239"
DEBU[2024-01-22 09:22:53.870704612Z] Using runtime executable from $PATH "/usr/bin/crun"  file="config/config.go:1519"
DEBU[2024-01-22 09:22:53.870760573Z] Found valid runtime "crun" for runtime_path "/usr/bin/crun"  file="config/config.go:1531"
DEBU[2024-01-22 09:22:53.870852915Z] Allowed annotations for runtime: [io.containers.trace-syscall]  file="config/config.go:1566"
DEBU[2024-01-22 09:22:53.878066292Z] Loading registries configuration "/etc/containers/registries.conf"  file="sysregistriesv2/system_registries_v2.go:926"
DEBU[2024-01-22 09:22:53.878144466Z] Loading registries configuration "/etc/containers/registries.conf.d/crio.conf"  file="sysregistriesv2/system_registries_v2.go:926"
DEBU[2024-01-22 09:22:53.878250070Z] Using hooks directory: /usr/share/containers/oci/hooks.d  file="config/config.go:1125"
DEBU[2024-01-22 09:22:53.878317184Z] Using pinns from $PATH: /usr/bin/pinns        file="config/config.go:1398"
INFO[2024-01-22 09:22:53.878351425Z] Checkpoint/restore support disabled           file="config/config.go:1150"
INFO[2024-01-22 09:22:53.878379287Z] Using seccomp default profile when unspecified: true  file="seccomp/seccomp.go:99"
INFO[2024-01-22 09:22:53.878400770Z] Using the internal default seccomp profile    file="seccomp/seccomp.go:152"
INFO[2024-01-22 09:22:53.878449168Z] Installing default AppArmor profile: crio-default  file="apparmor/apparmor_linux.go:46"
DEBU[2024-01-22 09:22:53.878473563Z] Using /sbin/apparmor_parser binary            file="supported/supported.go:61"
INFO[2024-01-22 09:22:53.897545604Z] No blockio config file specified, blockio not configured  file="blockio/blockio.go:74"
INFO[2024-01-22 09:22:53.897660437Z] RDT not available in the host system          file="rdt/rdt.go:56"
DEBU[2024-01-22 09:22:53.897730709Z] Using conmon from $PATH: /usr/bin/conmon      file="config/config.go:1398"
INFO[2024-01-22 09:22:53.898299452Z] Conmon does support the --sync option         file="conmonmgr/conmonmgr.go:85"
INFO[2024-01-22 09:22:53.898364384Z] Conmon does support the --log-global-size-max option  file="conmonmgr/conmonmgr.go:71"
INFO[2024-01-22 09:22:53.914271867Z] Found CNI network k8s-pod-network (type=calico) at /etc/cni/net.d/10-calico.conflist  file="ocicni/ocicni.go:343"
INFO[2024-01-22 09:22:53.915329501Z] Found CNI network crio (type=bridge) at /etc/cni/net.d/11-crio-ipv4-bridge.conflist  file="ocicni/ocicni.go:343"
INFO[2024-01-22 09:22:53.915371623Z] Updated default CNI network name to k8s-pod-network  file="ocicni/ocicni.go:375"
DEBU[2024-01-22 09:22:53.915690987Z] reading hooks from /usr/share/containers/oci/hooks.d  file="hooks/read.go:65"
INFO[2024-01-22 09:22:53.915748800Z] Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus  file="server/server.go:432"
INFO[2024-01-22 09:22:53.915791488Z] Restore irqbalance config: failed to get current CPU ban list, ignoring  file="runtimehandlerhooks/high_performance_hooks_linux.go:882"
DEBU[2024-01-22 09:22:53.916897541Z] Golang's threads limit set to 56250           file="server/server.go:382"
DEBU[2024-01-22 09:22:53.927997944Z] Skipping status update for: &{State:{Version:1.0.0 ID:e10995d74a206546d1862951d80e9ab0c7ca04f5315d6307cd69d15b75e89b7d Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/e10995d74a206546d1862951d80e9ab0c7ca04f5315d6307cd69d15b75e89b7d/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.hash:d9783aa0 io.kubernetes.container.name:install-cni io.kubernetes.container.restartCount:0 io.kubernetes.container.terminationMessagePath:/dev/termination-log io.kubernetes.container.terminationMessagePolicy:File io.kubernetes.cri-o.Annotations:{"io.kubernetes.container.hash":"d9783aa0","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.pod.terminationGracePeriod":"0"} io.kubernetes.cri-o.ContainerID:e10995d74a206546d1862951d80e9ab0c7ca04f5315d6307cd69d15b75e89b7d io.kubernetes.cri-o.ContainerType:container io.kubernetes.cri-o.Created:2024-01-22T09:20:53.402171277Z io.kubernetes.cri-o.Image:d70a5947d57e5ab3340d126a38e6ae51bd9e8e0b342daa2012e78d8868bed5b7 io.kubernetes.cri-o.ImageName:quay.io/calico/cni:v3.25.0 io.kubernetes.cri-o.ImageRef:d70a5947d57e5ab3340d126a38e6ae51bd9e8e0b342daa2012e78d8868bed5b7 io.kubernetes.cri-o.Labels:{"io.kubernetes.container.name":"install-cni","io.kubernetes.pod.name":"calico-node-gjg4n","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"3f4c1c06-be83-4f39-82fc-61404c1cfc40"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_calico-node-gjg4n_3f4c1c06-be83-4f39-82fc-61404c1cfc40/install-cni/0.log io.kubernetes.cri-o.Metadata:{"name":"install-cni"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/4e69bf310ceca1ce172f889181c35b636d67e283bba9fb01aad680a8ab279f78/merged io.kubernetes.cri-o.Name:k8s_install-cni_calico-node-gjg4n_kube-system_3f4c1c06-be83-4f39-82fc-61404c1cfc40_0 io.kubernetes.cri-o.PlatformRuntimePath: io.kubernetes.cri-o.ResolvPath:/run/containers/storage/overlay-containers/3603aa68bec5d2838100d27360b777481550b07a78fdd422ba5c9ce9dfb16f02/userdata/resolv.conf io.kubernetes.cri-o.SandboxID:3603aa68bec5d2838100d27360b777481550b07a78fdd422ba5c9ce9dfb16f02 io.kubernetes.cri-o.SandboxName:k8s_calico-node-gjg4n_kube-system_3f4c1c06-be83-4f39-82fc-61404c1cfc40_0 io.kubernetes.cri-o.SeccompProfilePath:Unconfined io.kubernetes.cri-o.Stdin:false io.kubernetes.cri-o.StdinOnce:false io.kubernetes.cri-o.TTY:false io.kubernetes.cri-o.Volumes:[{"container_path":"/etc/hosts","host_path":"/var/lib/kubelet/pods/3f4c1c06-be83-4f39-82fc-61404c1cfc40/etc-hosts","readonly":false,"propagation":0,"selinux_relabel":false},{"container_path":"/dev/termination-log","host_path":"/var/lib/kubelet/pods/3f4c1c06-be83-4f39-82fc-61404c1cfc40/containers/install-cni/78e67dde","readonly":false,"propagation":0,"selinux_relabel":false},{"container_path":"/host/opt/cni/bin","host_path":"/opt/cni/bin","readonly":false,"propagation":0,"selinux_relabel":false},{"container_path":"/host/etc/cni/net.d","host_path":"/etc/cni/net.d","readonly":false,"propagation":0,"selinux_relabel":false},{"container_path":"/var/run/secrets/kubernetes.io/serviceaccount","host_path":"/var/lib/kubelet/pods/3f4c1c06-be83-4f39-82fc-61404c1cfc40/volumes/kubernetes.io~projected/kube-api-access-hlxjv","readonly":true,"propagation":0,"selinux_relabel":false}] io.kubernetes.pod.name:calico-node-gjg4n io.kubernetes.pod.namespace:kube-system io.kubernetes.pod.terminationGracePeriod:0 
io.kubernetes.pod.uid:3f4c1c06-be83-4f39-82fc-61404c1cfc40 kubernetes.io/config.seen:2024-01-22T09:20:40.288785826Z kubernetes.io/config.source:api org.systemd.property.After:['crio.service'] org.systemd.property.DefaultDependencies:true org.systemd.property.TimeoutStopUSec:uint64 0000000]} Created:2024-01-22 09:20:53.427684 +0000 UTC Started:2024-01-22 09:20:53.43587167 +0000 UTC Finished:2024-01-22 09:20:53.942016938 +0000 UTC ExitCode:0xc00034d9c8 OOMKilled:false SeccompKilled:false Error: InitPid:5741 InitStartTime:7027 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}  file="oci/runtime_oci.go:946"
DEBU[2024-01-22 09:22:53.940506706Z] Skipping status update for: &{State:{Version:1.0.0 ID:4abeaafb3689fa6c455f22a064da6140b280cad522de5778b9dc05bd07b95956 Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/4abeaafb3689fa6c455f22a064da6140b280cad522de5778b9dc05bd07b95956/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.hash:8dd34c3d io.kubernetes.container.name:upgrade-ipam io.kubernetes.container.restartCount:0 io.kubernetes.container.terminationMessagePath:/dev/termination-log io.kubernetes.container.terminationMessagePolicy:File io.kubernetes.cri-o.Annotations:{"io.kubernetes.container.hash":"8dd34c3d","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.pod.terminationGracePeriod":"0"} io.kubernetes.cri-o.ContainerID:4abeaafb3689fa6c455f22a064da6140b280cad522de5778b9dc05bd07b95956 io.kubernetes.cri-o.ContainerType:container io.kubernetes.cri-o.Created:2024-01-22T09:20:52.729393943Z io.kubernetes.cri-o.Image:quay.io/calico/cni@sha256:a38d53cb8688944eafede2f0eadc478b1b403cefeff7953da57fe9cd2d65e977 io.kubernetes.cri-o.ImageName:quay.io/calico/cni:v3.25.0 io.kubernetes.cri-o.ImageRef:d70a5947d57e5ab3340d126a38e6ae51bd9e8e0b342daa2012e78d8868bed5b7 io.kubernetes.cri-o.Labels:{"io.kubernetes.container.name":"upgrade-ipam","io.kubernetes.pod.name":"calico-node-gjg4n","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"3f4c1c06-be83-4f39-82fc-61404c1cfc40"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_calico-node-gjg4n_3f4c1c06-be83-4f39-82fc-61404c1cfc40/upgrade-ipam/0.log io.kubernetes.cri-o.Metadata:{"name":"upgrade-ipam"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/b14242c1aeb152e548e0c3de7a59080e4c09935ededee28b0c320e2e1bebcd53/merged io.kubernetes.cri-o.Name:k8s_upgrade-ipam_calico-node-gjg4n_kube-system_3f4c1c06-be83-4f39-82fc-61404c1cfc40_0 io.kubernetes.cri-o.PlatformRuntimePath: io.kubernetes.cri-o.ResolvPath:/run/containers/storage/overlay-containers/3603aa68bec5d2838100d27360b777481550b07a78fdd422ba5c9ce9dfb16f02/userdata/resolv.conf io.kubernetes.cri-o.SandboxID:3603aa68bec5d2838100d27360b777481550b07a78fdd422ba5c9ce9dfb16f02 io.kubernetes.cri-o.SandboxName:k8s_calico-node-gjg4n_kube-system_3f4c1c06-be83-4f39-82fc-61404c1cfc40_0 io.kubernetes.cri-o.SeccompProfilePath:Unconfined io.kubernetes.cri-o.Stdin:false io.kubernetes.cri-o.StdinOnce:false io.kubernetes.cri-o.TTY:false io.kubernetes.cri-o.Volumes:[{"container_path":"/etc/hosts","host_path":"/var/lib/kubelet/pods/3f4c1c06-be83-4f39-82fc-61404c1cfc40/etc-hosts","readonly":false,"propagation":0,"selinux_relabel":false},{"container_path":"/dev/termination-log","host_path":"/var/lib/kubelet/pods/3f4c1c06-be83-4f39-82fc-61404c1cfc40/containers/upgrade-ipam/38c117e6","readonly":false,"propagation":0,"selinux_relabel":false},{"container_path":"/var/lib/cni/networks","host_path":"/var/lib/cni/networks","readonly":false,"propagation":0,"selinux_relabel":false},{"container_path":"/host/opt/cni/bin","host_path":"/opt/cni/bin","readonly":false,"propagation":0,"selinux_relabel":false},{"container_path":"/var/run/secrets/kubernetes.io/serviceaccount","host_path":"/var/lib/kubelet/pods/3f4c1c06-be83-4f39-82fc-61404c1cfc40/volumes/kubernetes.io~projected/kube-api-access-hlxjv","readonly":true,"propagation":0,"selinux_relabel":false}] io.kubernetes.pod.name:calico-node-gjg4n io.kubernetes.pod.namespace:kube-system 
io.kubernetes.pod.terminationGracePeriod:0 io.kubernetes.pod.uid:3f4c1c06-be83-4f39-82fc-61404c1cfc40 kubernetes.io/config.seen:2024-01-22T09:20:40.288785826Z kubernetes.io/config.source:api org.systemd.property.After:['crio.service'] org.systemd.property.DefaultDependencies:true org.systemd.property.TimeoutStopUSec:uint64 0000000]} Created:2024-01-22 09:20:52.758216 +0000 UTC Started:2024-01-22 09:20:52.765534833 +0000 UTC Finished:2024-01-22 09:20:52.782596951 +0000 UTC ExitCode:0xc0006d5900 OOMKilled:false SeccompKilled:false Error: InitPid:5726 InitStartTime:6960 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}  file="oci/runtime_oci.go:946"
DEBU[2024-01-22 09:22:53.944785107Z] Skipping status update for: &{State:{Version:1.0.0 ID:0d9dbf7eb6ce85257f01a339af132bb9e242be4ddfa5c7b8f6086a173d14a204 Status:stopped Pid:0 Bundle:/run/containers/storage/overlay-containers/0d9dbf7eb6ce85257f01a339af132bb9e242be4ddfa5c7b8f6086a173d14a204/userdata Annotations:map[io.container.manager:cri-o io.kubernetes.container.hash:78ea02c1 io.kubernetes.container.name:mount-bpffs io.kubernetes.container.restartCount:0 io.kubernetes.container.terminationMessagePath:/dev/termination-log io.kubernetes.container.terminationMessagePolicy:File io.kubernetes.cri-o.Annotations:{"io.kubernetes.container.hash":"78ea02c1","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.pod.terminationGracePeriod":"0"} io.kubernetes.cri-o.ContainerID:0d9dbf7eb6ce85257f01a339af132bb9e242be4ddfa5c7b8f6086a173d14a204 io.kubernetes.cri-o.ContainerType:container io.kubernetes.cri-o.Created:2024-01-22T09:21:10.67467138Z io.kubernetes.cri-o.Image:quay.io/calico/node@sha256:56db28c3632192f56a1ff1360b83ef640fc8f41fa21a83126194811713e2f022 io.kubernetes.cri-o.ImageName:quay.io/calico/node:v3.25.0 io.kubernetes.cri-o.ImageRef:08616d26b8e74867402274687491e5978ba4a6ded94e9f5ecc3e364024e5683e io.kubernetes.cri-o.Labels:{"io.kubernetes.container.name":"mount-bpffs","io.kubernetes.pod.name":"calico-node-gjg4n","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.uid":"3f4c1c06-be83-4f39-82fc-61404c1cfc40"} io.kubernetes.cri-o.LogPath:/var/log/pods/kube-system_calico-node-gjg4n_3f4c1c06-be83-4f39-82fc-61404c1cfc40/mount-bpffs/0.log io.kubernetes.cri-o.Metadata:{"name":"mount-bpffs"} io.kubernetes.cri-o.MountPoint:/var/lib/containers/storage/overlay/7a2725ed4e5cb197d109a811db3a301d16bae8b53497662464cc837a334b0c9d/merged io.kubernetes.cri-o.Name:k8s_mount-bpffs_calico-node-gjg4n_kube-system_3f4c1c06-be83-4f39-82fc-61404c1cfc40_0 io.kubernetes.cri-o.PlatformRuntimePath: io.kubernetes.cri-o.ResolvPath:/run/containers/storage/overlay-containers/3603aa68bec5d2838100d27360b777481550b07a78fdd422ba5c9ce9dfb16f02/userdata/resolv.conf io.kubernetes.cri-o.SandboxID:3603aa68bec5d2838100d27360b777481550b07a78fdd422ba5c9ce9dfb16f02 io.kubernetes.cri-o.SandboxName:k8s_calico-node-gjg4n_kube-system_3f4c1c06-be83-4f39-82fc-61404c1cfc40_0 io.kubernetes.cri-o.SeccompProfilePath:Unconfined io.kubernetes.cri-o.Stdin:false io.kubernetes.cri-o.StdinOnce:false io.kubernetes.cri-o.TTY:false io.kubernetes.cri-o.Volumes:[{"container_path":"/nodeproc","host_path":"/proc","readonly":true,"propagation":0,"selinux_relabel":false},{"container_path":"/sys/fs","host_path":"/sys/fs","readonly":false,"propagation":2,"selinux_relabel":false},{"container_path":"/etc/hosts","host_path":"/var/lib/kubelet/pods/3f4c1c06-be83-4f39-82fc-61404c1cfc40/etc-hosts","readonly":false,"propagation":0,"selinux_relabel":false},{"container_path":"/dev/termination-log","host_path":"/var/lib/kubelet/pods/3f4c1c06-be83-4f39-82fc-61404c1cfc40/containers/mount-bpffs/033a32db","readonly":false,"propagation":0,"selinux_relabel":false},{"container_path":"/var/run/calico","host_path":"/var/run/calico","readonly":false,"propagation":2,"selinux_relabel":false},{"container_path":"/var/run/secrets/kubernetes.io/serviceaccount","host_path":"/var/lib/kubelet/pods/3f4c1c06-be83-4f39-82fc-61404c1cfc40/volumes/kubernetes.io~projected/kube-api-access-hlxjv","readonly":true,"propagation":0,"selinux_relabel":false}] 
io.kubernetes.pod.name:calico-node-gjg4n io.kubernetes.pod.namespace:kube-system io.kubernetes.pod.terminationGracePeriod:0 io.kubernetes.pod.uid:3f4c1c06-be83-4f39-82fc-61404c1cfc40 kubernetes.io/config.seen:2024-01-22T09:20:40.288785826Z kubernetes.io/config.source:api org.systemd.property.After:['crio.service'] org.systemd.property.DefaultDependencies:true org.systemd.property.TimeoutStopUSec:uint64 0000000]} Created:2024-01-22 09:21:10.702328 +0000 UTC Started:2024-01-22 09:21:10.720238679 +0000 UTC Finished:2024-01-22 09:21:10.765600745 +0000 UTC ExitCode:0xc00062b548 OOMKilled:false SeccompKilled:false Error: InitPid:6334 InitStartTime:8755 CheckpointedAt:0001-01-01 00:00:00 +0000 UTC}  file="oci/runtime_oci.go:946"
INFO[2024-01-22 09:22:53.947520038Z] Got pod network &{Name:dashboard-metrics-scraper-5657497c4c-jdnrv Namespace:kubernetes-dashboard ID:ebc5b52f310931b2d5e9aaa2e39a122b924a803f96d95e72420508353cd48e82 UID:2f05d7ef-2bc7-4f9a-ad13-f2c7509732b0 NetNS:/var/run/netns/8d9001e0-7f71-46c9-af07-6056d48c742a Networks:[] RuntimeConfig:map[k8s-pod-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:kubepods-besteffort-pod2f05d7ef_2bc7_4f9a_ad13_f2c7509732b0.slice PodAnnotations:0xc00007a3f8}] Aliases:map[]}  file="ocicni/ocicni.go:795"
INFO[2024-01-22 09:22:53.947823063Z] Checking pod kubernetes-dashboard_dashboard-metrics-scraper-5657497c4c-jdnrv for CNI network k8s-pod-network (type=calico)  file="ocicni/ocicni.go:695"
INFO[2024-01-22 09:22:53.948107043Z] Got pod network &{Name:kubernetes-dashboard-78f87ddfc-k9ll4 Namespace:kubernetes-dashboard ID:5b176d7776e68856adc88c494409479d4b0d4d759787489ecc91f355b0d2f427 UID:ea501514-fe2d-4327-b37a-64c29b64157e NetNS:/var/run/netns/c8af1ec5-5c8a-459c-b5e6-c0f3d7e452d1 Networks:[] RuntimeConfig:map[k8s-pod-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath:kubepods-besteffort-podea501514_fe2d_4327_b37a_64c29b64157e.slice PodAnnotations:0xc00007a770}] Aliases:map[]}  file="ocicni/ocicni.go:795"
INFO[2024-01-22 09:22:53.948286551Z] Checking pod kubernetes-dashboard_kubernetes-dashboard-78f87ddfc-k9ll4 for CNI network k8s-pod-network (type=calico)  file="ocicni/ocicni.go:695"
DEBU[2024-01-22 09:22:53.948556457Z] Sandboxes: [0xc00057d180 0xc00057cfc0 0xc00057ce00 0xc00057cc40 0xc00057d500 0xc00057d340]  file="server/server.go:550"
INFO[2024-01-22 09:22:53.948683400Z] Registered SIGHUP reload watcher              file="server/server.go:602"
DEBU[2024-01-22 09:22:53.948708998Z] Metrics are disabled                          file="server/server.go:560"
INFO[2024-01-22 09:22:53.948740338Z] Starting seccomp notifier watcher             file="server/server_linux.go:19"
INFO[2024-01-22 09:22:53.948772254Z] Create NRI interface                          file="nri/nri.go:94"
INFO[2024-01-22 09:22:53.948790228Z] NRI interface is disabled in the configuration.  file="nri/nri.go:101"
DEBU[2024-01-22 09:22:53.950076520Z] monitoring "/usr/share/containers/oci/hooks.d" for hooks  file="hooks/monitor.go:43"
DEBU[2024-01-22 09:23:01.295286679Z] Using /sbin/apparmor_parser binary            file="supported/supported.go:61"

However, it would be useful to emit the full runtime configuration at startup, so that it is captured as part of the dedicated CRI-O process log or the system log. This also helps when logs are forwarded to an external log storage or indexing service.

Granted, the crio status config command (available from CRI-O release 1.29 onwards) can be used to capture the current runtime configuration, which is shown as a TOML file. However, many users either do not know about this functionality or simply forget about it when submitting a bug report or a feature request, and by then it is often too late to collect the initial runtime configuration data, as the CRI-O process might have been restarted, crashed, etc.

Thus, similarly to the kubelet in Kubernetes, we should add the ability for CRI-O to display its runtime configuration on startup. That way the configuration is captured in a log file for later reference, and when CRI-O is run interactively in a terminal, the user gets immediate insight into the current settings and each option's value.

Behind the scenes, the crio status config command mentioned earlier works as an HTTP client connecting over CRI-O's exposed control Unix socket. As such, it is equivalent to making a request with curl (or any other HTTP client capable of connecting over a Unix socket), for example:

(making a request manually using curl -v --unix-socket /var/run/crio/crio.sock http://0/config ...)

# curl -v --unix-socket /var/run/crio/crio.sock http://0/config
*   Trying /var/run/crio/crio.sock:0...
* Connected to 0.0.0.0 (/var/run/crio/crio.sock) port 80 (#0)
> GET /config HTTP/1.1
> Host: 0.0.0.0
> User-Agent: curl/7.81.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/toml
< Date: Mon, 22 Jan 2024 10:05:03 GMT
< Transfer-Encoding: chunked
< 
[crio]
  root = "/var/lib/containers/storage"
  runroot = "/run/containers/storage"
  imagestore = ""
  storage_driver = "overlay"
  log_dir = "/var/log/crio/pods"
  version_file = "/var/run/crio/version"
  version_file_persist = ""
  clean_shutdown_file = "/var/lib/crio/clean.shutdown"
  internal_wipe = true
  internal_repair = false
  [crio.api]
    grpc_max_send_msg_size = 83886080
    grpc_max_recv_msg_size = 83886080
    listen = "/var/run/crio/crio.sock"
    stream_address = "127.0.0.1"
    stream_port = "0"
    stream_enable_tls = false
    stream_tls_cert = ""
    stream_tls_key = ""
    stream_tls_ca = ""
    stream_idle_timeout = ""
  [crio.runtime]
    seccomp_use_default_when_empty = true
    no_pivot = false
    selinux = false
    log_to_journald = false
    drop_infra_ctr = true
    read_only = false
    hooks_dir = ["/usr/share/containers/oci/hooks.d"]
    default_capabilities = ["CHOWN", "DAC_OVERRIDE", "FSETID", "FOWNER", "SETGID", "SETUID", "SETPCAP", "NET_BIND_SERVICE", "KILL"]
    add_inheritable_capabilities = false
    allowed_devices = ["/dev/fuse"]
    cdi_spec_dirs = ["/etc/cdi", "/var/run/cdi"]
    device_ownership_from_security_context = false
    default_runtime = "crun"
    decryption_keys_path = "/etc/crio/keys/"
    conmon = ""
    conmon_cgroup = ""
    seccomp_profile = ""
    apparmor_profile = "crio-default"
    blockio_config_file = ""
    blockio_reload = false
    irqbalance_config_file = "/etc/sysconfig/irqbalance"
    rdt_config_file = ""
    cgroup_manager = "systemd"
    default_mounts_file = ""
    container_exits_dir = "/var/run/crio/exits"
    container_attach_socket_dir = "/var/run/crio"
    bind_mount_prefix = ""
    uid_mappings = ""
    minimum_mappable_uid = -1
    gid_mappings = ""
    minimum_mappable_gid = -1
    log_level = "info"
    log_filter = ""
    namespaces_dir = "/var/run"
    pinns_path = "/usr/bin/pinns"
    enable_criu_support = false
    pids_limit = -1
    log_size_max = -1
    ctr_stop_timeout = 30
    separate_pull_cgroup = ""
    infra_ctr_cpuset = ""
    shared_cpuset = ""
    enable_pod_events = false
    irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
    hostnetwork_disable_selinux = true
    disable_hostport_mapping = false
    [crio.runtime.runtimes]
      [crio.runtime.runtimes.crun]
        runtime_config_path = ""
        runtime_path = "/usr/bin/crun"
        runtime_type = ""
        runtime_root = ""
        allowed_annotations = ["io.containers.trace-syscall"]
        DisallowedAnnotations = ["io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw", "io.kubernetes.cri-o.Devices", "cpu-quota.crio.io", "io.kubernetes.cri-o.TrySkipVolumeSELinuxLabel", "cpu-freq-governor.crio.io", "io.kubernetes.cri-o.seccompNotifierAction", "io.kubernetes.cri-o.userns-mode", "io.kubernetes.cri-o.UnifiedCgroup", "io.kubernetes.cri-o.ShmSize", "irq-load-balancing.crio.io", "cpu-shared.crio.io", "io.kubernetes.cri-o.umask", "io.kubernetes.cri-o.PodLinuxOverhead", "io.kubernetes.cri-o.PodLinuxResources", "cpu-load-balancing.crio.io", "io.kubernetes.cri.rdt-class", "cpu-c-states.crio.io", "io.kubernetes.cri-o.LinkLogs"]
        monitor_path = "/usr/bin/conmon"
        monitor_cgroup = "system.slice"
  [crio.image]
    default_transport = "docker://"
    global_auth_file = ""
    pause_image = "registry.k8s.io/pause:3.9"
    pause_image_auth_file = ""
    pause_command = "/pause"
    pinned_images = ["registry.k8s.io/pause:3.9"]
    signature_policy = ""
    signature_policy_dir = "/etc/crio/policies"
    insecure_registries = ["127.0.0.0/8"]
    image_volumes = "mkdir"
    big_files_temporary_dir = ""
  [crio.network]
    cni_default_network = ""
    network_dir = "/etc/cni/net.d/"
    plugin_dirs = ["/opt/cni/bin/"]
  [crio.metrics]
    enable_metrics = false
    metrics_collectors = ["operations", "operations_latency_microseconds_total", "operations_latency_microseconds", "operations_errors", "image_pulls_by_digest", "image_pulls_by_name", "image_pulls_by_name_skipped", "image_pulls_failures", "image_pulls_successes", "image_pulls_layer_size", "image_layer_reuse", "containers_events_dropped_total", "containers_oom_total", "containers_oom", "processes_defunct", "operations_total", "operations_latency_seconds", "operations_latency_seconds_total", "operations_errors_total", "image_pulls_bytes_total", "image_pulls_skipped_bytes_total", "image_pulls_failure_total", "image_pulls_success_total", "image_layer_reuse_total", "containers_oom_count_total", "containers_seccomp_notifier_count_total", "resources_stalled_at_stage"]
    metrics_port = 9090
    metrics_socket = ""
    metrics_cert = ""
    metrics_key = ""
  [crio.tracing]
    enable_tracing = false
    tracing_endpoint = "0.0.0.0:4317"
    tracing_sampling_rate_per_million = 0
  [crio.stats]
    stats_collection_period = 0
  [crio.nri]
    enable_nri = false
    nri_listen = "/var/run/nri/nri.sock"
    nri_plugin_dir = "/opt/nri/plugins"
    nri_plugin_config_dir = "/etc/nri/conf.d"
    nri_plugin_registration_timeout = "5s"
    nri_plugin_request_timeout = "2s"
    nri_disable_connections = false
* Connection #0 to host 0.0.0.0 left intact
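
For completeness, the same request can be made from Go's standard library by pointing an http.Client at the Unix socket via a custom DialContext. This is only an illustration of the transport, not code that exists in CRI-O:

package main

import (
    "context"
    "fmt"
    "io"
    "log"
    "net"
    "net/http"
)

func main() {
    // Dial the CRI-O control socket instead of a TCP address; the URL host below
    // is a placeholder and is ignored by the custom dialer.
    client := &http.Client{
        Transport: &http.Transport{
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/crio/crio.sock")
            },
        },
    }

    resp, err := client.Get("http://localhost/config")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Print(string(body)) // The TOML document, same as the curl output above.
}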

The relevant code, which also shows how the runtime configuration is serialised, can be found in the following:

(the code below comes from the func (s *Server) GetExtendInterfaceMux(bool) *chi.Mux function)

cri-o/server/inspect.go

Lines 135 to 146 in 91816d7

mux.Get(InspectConfigEndpoint, http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
    b, err := s.config.ToBytes()
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.Header().Set("Content-Type", "application/toml")
    if _, err := w.Write(b); err != nil {
        logrus.Errorf("Unable to write response TOML: %v", err)
    }
}))

cri-o/pkg/config/config.go

Lines 799 to 811 in 91816d7

func (c *Config) ToBytes() ([]byte, error) {
    var buffer bytes.Buffer
    e := toml.NewEncoder(&buffer)
    tc := tomlConfig{}
    tc.fromConfig(c)
    if err := e.Encode(tc); err != nil {
        return nil, err
    }
    return buffer.Bytes(), nil
}

The open question is what the output of the runtime configuration should look like. Should it be similar to how kubelet prints its configuration? Should a pretty printer such as go-spew (sadly no longer actively maintained) print the values? Or should it use the style of the Printf() function from the built-in fmt package, such as the %#v and %+v format verbs? One of these would need to be picked, or a custom format and/or printer could be introduced at this point.
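
For reference, this is roughly what the two fmt verbs produce (using a made-up struct for illustration, not CRI-O's actual types):

package main

import "fmt"

type apiConfig struct {
    Listen        string
    StreamAddress string
    StreamPort    string
}

func main() {
    c := apiConfig{Listen: "/var/run/crio/crio.sock", StreamAddress: "127.0.0.1", StreamPort: "0"}

    // %+v prints field names alongside the values.
    fmt.Printf("%+v\n", c)
    // {Listen:/var/run/crio/crio.sock StreamAddress:127.0.0.1 StreamPort:0}

    // %#v prints a Go-syntax representation, including the type name.
    fmt.Printf("%#v\n", c)
    // main.apiConfig{Listen:"/var/run/crio/crio.sock", StreamAddress:"127.0.0.1", StreamPort:"0"}
}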

The obvious place for the code that prints the startup configuration would be right after the configuration validation:

(the code below comes from the app.Action callback)

cri-o/cmd/crio/main.go

Lines 265 to 269 in 91816d7

// Validate the configuration during runtime
if err := config.Validate(true); err != nil {
    cancel()
    return err
}

However, this wouldn't cover the configuration hot reload (triggered by sending SIGHUP to the CRI-O process), which can change the current runtime configuration. Handling both cases is one of the challenges here, aside from choosing the output format for printing the configuration.
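
As an illustration of the reload side (this is a standalone sketch, not CRI-O's actual SIGHUP handling), whatever function prints the configuration at startup would also need to be called from the reload path:

package reloadlog

import (
    "os"
    "os/signal"
    "syscall"

    "github.com/sirupsen/logrus"
)

// reprintOnReload wires a SIGHUP listener to the same function that prints the
// configuration at startup, so a hot reload re-logs the effective settings.
// This is only a sketch of the idea; CRI-O already has its own SIGHUP watcher.
func reprintOnReload(printConfig func()) {
    ch := make(chan os.Signal, 1)
    signal.Notify(ch, syscall.SIGHUP)
    go func() {
        for range ch {
            logrus.Info("Configuration reloaded, printing effective settings")
            printConfig()
        }
    }()
}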

@kwilczynski
Member Author

/help
/good-first-issue

Contributor

openshift-ci bot commented Jan 22, 2024

@kwilczynski:
This request has been marked as suitable for new contributors.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.

In response to this:

/help
/good-first-issue

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot added good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Jan 22, 2024
@LenkaSeg
Contributor

I'd be interested in taking this issue!

@kwilczynski
Member Author

@LenkaSeg, it's all yours!

Feel free to reach out if you have any questions. Happy to help clarify things, etc, etc.

@kannon92
Contributor

So any reason why we shouldn't just log the config to disk? Similar to /etc/kubernetes/kubeletconfig.yaml?
I am thinking of sos reports or gathers where we can write the config to disk and not worry about these logs being rotated.

@kwilczynski
Member Author

@kannon92, we could save the current runtime configuration after startup and following a reload to some runtime location, such as /var/run/crio/crio.toml (or crio.conf)?
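
For illustration, persisting the serialised configuration to such a runtime path could look like the sketch below; the helper name, package, and exact path are assumptions, not existing CRI-O code:

package runtimedump

import (
    "os"

    "github.com/cri-o/cri-o/pkg/config"
)

// writeRuntimeConfig is a hypothetical helper: it dumps the effective
// configuration (as TOML) to a runtime location, e.g. /var/run/crio/crio.toml,
// so that sos reports or must-gather tooling can pick it up even after the
// process logs have rotated.
func writeRuntimeConfig(c *config.Config, path string) error {
    b, err := c.ToBytes()
    if err != nil {
        return err
    }
    // Keep permissions conservative; owned by root, not world-readable.
    return os.WriteFile(path, b, 0o600)
}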

@kannon92
Contributor

I'd think that would be useful. Or maybe /etc/crio/crio.toml? Trying to follow other kube configs (like storage).

@kwilczynski
Member Author

@kannon92, I wanted to avoid /etc/crio to make a clear distinction between configuration and runtime configuration (and also to avoid loading it by accident, or to discourage people from renaming it to crio.conf to have it loaded, etc.), but perhaps it would be fine.

@haircommander
Member

I think printing the internal representation of the config is a good idea. A user can piece it together, but they have to take into account env vars, CLI flags, and multiple drop-in files. A user can already print it with crio status config or crio config, but printing it in the logs sounds useful.

@kwilczynski
Member Author

Since we mentioned kubelet, this is how it currently does it. For reference only:

  • Command-line flags
I0206 18:45:43.126776   92170 flags.go:64] FLAG: --address="0.0.0.0"
I0206 18:45:43.126972   92170 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
I0206 18:45:43.127108   92170 flags.go:64] FLAG: --anonymous-auth="true"
I0206 18:45:43.127202   92170 flags.go:64] FLAG: --application-metrics-count-limit="100"
I0206 18:45:43.127320   92170 flags.go:64] FLAG: --authentication-token-webhook="false"
I0206 18:45:43.127427   92170 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
I0206 18:45:43.127541   92170 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
I0206 18:45:43.127660   92170 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
I0206 18:45:43.127769   92170 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
I0206 18:45:43.127863   92170 flags.go:64] FLAG: --azure-container-registry-config=""
I0206 18:45:43.127968   92170 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
I0206 18:45:43.128075   92170 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/bootstrap-kubelet.conf"
I0206 18:45:43.128167   92170 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
I0206 18:45:43.128266   92170 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
I0206 18:45:43.128366   92170 flags.go:64] FLAG: --cgroup-root=""
I0206 18:45:43.128454   92170 flags.go:64] FLAG: --cgroups-per-qos="true"
I0206 18:45:43.128562   92170 flags.go:64] FLAG: --client-ca-file=""
I0206 18:45:43.128623   92170 flags.go:64] FLAG: --cloud-config=""
I0206 18:45:43.128677   92170 flags.go:64] FLAG: --cloud-provider=""
I0206 18:45:43.128754   92170 flags.go:64] FLAG: --cluster-dns="[]"
I0206 18:45:43.128840   92170 flags.go:64] FLAG: --cluster-domain=""
I0206 18:45:43.128916   92170 flags.go:64] FLAG: --config="/var/lib/kubelet/config.yaml"
I0206 18:45:43.128976   92170 flags.go:64] FLAG: --config-dir=""
I0206 18:45:43.129036   92170 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
I0206 18:45:43.129094   92170 flags.go:64] FLAG: --container-log-max-files="5"
I0206 18:45:43.129157   92170 flags.go:64] FLAG: --container-log-max-size="10Mi"
I0206 18:45:43.129215   92170 flags.go:64] FLAG: --container-runtime-endpoint="unix:///var/run/crio/crio.sock"
I0206 18:45:43.129273   92170 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
I0206 18:45:43.129333   92170 flags.go:64] FLAG: --containerd-namespace="k8s.io"
I0206 18:45:43.129390   92170 flags.go:64] FLAG: --contention-profiling="false"
I0206 18:45:43.129450   92170 flags.go:64] FLAG: --cpu-cfs-quota="true"
I0206 18:45:43.129511   92170 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
I0206 18:45:43.129571   92170 flags.go:64] FLAG: --cpu-manager-policy="none"
I0206 18:45:43.129634   92170 flags.go:64] FLAG: --cpu-manager-policy-options=""
I0206 18:45:43.129727   92170 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
I0206 18:45:43.129786   92170 flags.go:64] FLAG: --enable-controller-attach-detach="true"
I0206 18:45:43.129841   92170 flags.go:64] FLAG: --enable-debugging-handlers="true"
I0206 18:45:43.129905   92170 flags.go:64] FLAG: --enable-load-reader="false"
I0206 18:45:43.129960   92170 flags.go:64] FLAG: --enable-server="true"
I0206 18:45:43.130015   92170 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
I0206 18:45:43.130077   92170 flags.go:64] FLAG: --event-burst="100"
I0206 18:45:43.130133   92170 flags.go:64] FLAG: --event-qps="50"
I0206 18:45:43.130189   92170 flags.go:64] FLAG: --event-storage-age-limit="default=0"
I0206 18:45:43.130251   92170 flags.go:64] FLAG: --event-storage-event-limit="default=0"
I0206 18:45:43.130307   92170 flags.go:64] FLAG: --eviction-hard=""
I0206 18:45:43.130366   92170 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
I0206 18:45:43.130423   92170 flags.go:64] FLAG: --eviction-minimum-reclaim=""
I0206 18:45:43.130488   92170 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
I0206 18:45:43.130547   92170 flags.go:64] FLAG: --eviction-soft=""
I0206 18:45:43.130602   92170 flags.go:64] FLAG: --eviction-soft-grace-period=""
I0206 18:45:43.130656   92170 flags.go:64] FLAG: --exit-on-lock-contention="false"
I0206 18:45:43.130713   92170 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
I0206 18:45:43.130769   92170 flags.go:64] FLAG: --experimental-mounter-path=""
I0206 18:45:43.130863   92170 flags.go:64] FLAG: --fail-swap-on="true"
I0206 18:45:43.130914   92170 flags.go:64] FLAG: --feature-gates=""
I0206 18:45:43.130971   92170 flags.go:64] FLAG: --file-check-frequency="20s"
I0206 18:45:43.131028   92170 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
I0206 18:45:43.131089   92170 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
I0206 18:45:43.131149   92170 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
I0206 18:45:43.131206   92170 flags.go:64] FLAG: --healthz-port="10248"
I0206 18:45:43.131264   92170 flags.go:64] FLAG: --help="false"
I0206 18:45:43.131317   92170 flags.go:64] FLAG: --hostname-override=""
I0206 18:45:43.131376   92170 flags.go:64] FLAG: --housekeeping-interval="10s"
I0206 18:45:43.131430   92170 flags.go:64] FLAG: --http-check-frequency="20s"
I0206 18:45:43.131494   92170 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
I0206 18:45:43.131550   92170 flags.go:64] FLAG: --image-credential-provider-config=""
I0206 18:45:43.131605   92170 flags.go:64] FLAG: --image-gc-high-threshold="85"
I0206 18:45:43.131663   92170 flags.go:64] FLAG: --image-gc-low-threshold="80"
I0206 18:45:43.131719   92170 flags.go:64] FLAG: --image-service-endpoint=""
I0206 18:45:43.131769   92170 flags.go:64] FLAG: --iptables-drop-bit="15"
I0206 18:45:43.131812   92170 flags.go:64] FLAG: --iptables-masquerade-bit="14"
I0206 18:45:43.131871   92170 flags.go:64] FLAG: --keep-terminated-pod-volumes="false"
I0206 18:45:43.131940   92170 flags.go:64] FLAG: --kernel-memcg-notification="false"
I0206 18:45:43.131998   92170 flags.go:64] FLAG: --kube-api-burst="100"
I0206 18:45:43.132049   92170 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
I0206 18:45:43.132130   92170 flags.go:64] FLAG: --kube-api-qps="50"
I0206 18:45:43.132185   92170 flags.go:64] FLAG: --kube-reserved=""
I0206 18:45:43.132245   92170 flags.go:64] FLAG: --kube-reserved-cgroup=""
I0206 18:45:43.132299   92170 flags.go:64] FLAG: --kubeconfig="/etc/kubernetes/kubelet.conf"
I0206 18:45:43.132358   92170 flags.go:64] FLAG: --kubelet-cgroups=""
I0206 18:45:43.132415   92170 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
I0206 18:45:43.132482   92170 flags.go:64] FLAG: --lock-file=""
I0206 18:45:43.132539   92170 flags.go:64] FLAG: --log-cadvisor-usage="false"
I0206 18:45:43.132599   92170 flags.go:64] FLAG: --log-flush-frequency="5s"
I0206 18:45:43.132657   92170 flags.go:64] FLAG: --log-json-info-buffer-size="0"
I0206 18:45:43.132716   92170 flags.go:64] FLAG: --log-json-split-stream="false"
I0206 18:45:43.132768   92170 flags.go:64] FLAG: --logging-format="text"
I0206 18:45:43.132823   92170 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
I0206 18:45:43.132876   92170 flags.go:64] FLAG: --make-iptables-util-chains="true"
I0206 18:45:43.132939   92170 flags.go:64] FLAG: --manifest-url=""
I0206 18:45:43.133006   92170 flags.go:64] FLAG: --manifest-url-header=""
I0206 18:45:43.133061   92170 flags.go:64] FLAG: --max-open-files="1000000"
I0206 18:45:43.133132   92170 flags.go:64] FLAG: --max-pods="110"
I0206 18:45:43.133179   92170 flags.go:64] FLAG: --maximum-dead-containers="-1"
I0206 18:45:43.133236   92170 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
I0206 18:45:43.133285   92170 flags.go:64] FLAG: --memory-manager-policy="None"
I0206 18:45:43.133336   92170 flags.go:64] FLAG: --minimum-container-ttl-duration="0s"
I0206 18:45:43.133399   92170 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
I0206 18:45:43.133457   92170 flags.go:64] FLAG: --node-ip="10.0.0.11"
I0206 18:45:43.133506   92170 flags.go:64] FLAG: --node-labels=""
I0206 18:45:43.133563   92170 flags.go:64] FLAG: --node-status-max-images="50"
I0206 18:45:43.133621   92170 flags.go:64] FLAG: --node-status-update-frequency="10s"
I0206 18:45:43.133665   92170 flags.go:64] FLAG: --oom-score-adj="-999"
I0206 18:45:43.133719   92170 flags.go:64] FLAG: --pod-cidr=""
I0206 18:45:43.133775   92170 flags.go:64] FLAG: --pod-infra-container-image="registry.k8s.io/pause:3.9"
I0206 18:45:43.133832   92170 flags.go:64] FLAG: --pod-manifest-path=""
I0206 18:45:43.133887   92170 flags.go:64] FLAG: --pod-max-pids="-1"
I0206 18:45:43.133942   92170 flags.go:64] FLAG: --pods-per-core="0"
I0206 18:45:43.133991   92170 flags.go:64] FLAG: --port="10250"
I0206 18:45:43.134045   92170 flags.go:64] FLAG: --protect-kernel-defaults="false"
I0206 18:45:43.134099   92170 flags.go:64] FLAG: --provider-id=""
I0206 18:45:43.134151   92170 flags.go:64] FLAG: --qos-reserved=""
I0206 18:45:43.134201   92170 flags.go:64] FLAG: --read-only-port="10255"
I0206 18:45:43.134258   92170 flags.go:64] FLAG: --register-node="true"
I0206 18:45:43.134313   92170 flags.go:64] FLAG: --register-schedulable="true"
I0206 18:45:43.134369   92170 flags.go:64] FLAG: --register-with-taints=""
I0206 18:45:43.134435   92170 flags.go:64] FLAG: --registry-burst="10"
I0206 18:45:43.134484   92170 flags.go:64] FLAG: --registry-qps="5"
I0206 18:45:43.134542   92170 flags.go:64] FLAG: --reserved-cpus=""
I0206 18:45:43.134588   92170 flags.go:64] FLAG: --reserved-memory=""
I0206 18:45:43.134642   92170 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
I0206 18:45:43.134696   92170 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
I0206 18:45:43.134747   92170 flags.go:64] FLAG: --rotate-certificates="false"
I0206 18:45:43.134797   92170 flags.go:64] FLAG: --rotate-server-certificates="false"
I0206 18:45:43.134851   92170 flags.go:64] FLAG: --runonce="false"
I0206 18:45:43.134905   92170 flags.go:64] FLAG: --runtime-cgroups=""
I0206 18:45:43.134958   92170 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
I0206 18:45:43.135008   92170 flags.go:64] FLAG: --seccomp-default="false"
I0206 18:45:43.135062   92170 flags.go:64] FLAG: --serialize-image-pulls="true"
I0206 18:45:43.135115   92170 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
I0206 18:45:43.135169   92170 flags.go:64] FLAG: --storage-driver-db="cadvisor"
I0206 18:45:43.135217   92170 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
I0206 18:45:43.135287   92170 flags.go:64] FLAG: --storage-driver-password="root"
I0206 18:45:43.135360   92170 flags.go:64] FLAG: --storage-driver-secure="false"
I0206 18:45:43.135424   92170 flags.go:64] FLAG: --storage-driver-table="stats"
I0206 18:45:43.135478   92170 flags.go:64] FLAG: --storage-driver-user="root"
I0206 18:45:43.135533   92170 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
I0206 18:45:43.135586   92170 flags.go:64] FLAG: --sync-frequency="1m0s"
I0206 18:45:43.135633   92170 flags.go:64] FLAG: --system-cgroups=""
I0206 18:45:43.135688   92170 flags.go:64] FLAG: --system-reserved=""
I0206 18:45:43.135741   92170 flags.go:64] FLAG: --system-reserved-cgroup=""
I0206 18:45:43.135794   92170 flags.go:64] FLAG: --tls-cert-file=""
I0206 18:45:43.135842   92170 flags.go:64] FLAG: --tls-cipher-suites="[]"
I0206 18:45:43.135905   92170 flags.go:64] FLAG: --tls-min-version=""
I0206 18:45:43.135957   92170 flags.go:64] FLAG: --tls-private-key-file=""
I0206 18:45:43.136010   92170 flags.go:64] FLAG: --topology-manager-policy="none"
I0206 18:45:43.136063   92170 flags.go:64] FLAG: --topology-manager-policy-options=""
I0206 18:45:43.136121   92170 flags.go:64] FLAG: --topology-manager-scope="container"
I0206 18:45:43.136171   92170 flags.go:64] FLAG: --v="6"
I0206 18:45:43.136228   92170 flags.go:64] FLAG: --version="false"
I0206 18:45:43.136284   92170 flags.go:64] FLAG: --vmodule=""
I0206 18:45:43.136348   92170 flags.go:64] FLAG: --volume-plugin-dir="/usr/libexec/kubernetes/kubelet-plugins/volume/exec/"
I0206 18:45:43.136408   92170 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
  • Configuration
I0206 18:45:43.138073   92170 server.go:278] "KubeletConfiguration" configuration=<
	{
	  "EnableServer": true,
	  "StaticPodPath": "/etc/kubernetes/manifests",
	  "SyncFrequency": "1m0s",
	  "FileCheckFrequency": "20s",
	  "HTTPCheckFrequency": "20s",
	  "StaticPodURL": "",
	  "StaticPodURLHeader": null,
	  "Address": "0.0.0.0",
	  "Port": 10250,
	  "ReadOnlyPort": 0,
	  "VolumePluginDir": "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/",
	  "ProviderID": "",
	  "TLSCertFile": "/var/lib/kubelet/pki/kubelet.crt",
	  "TLSPrivateKeyFile": "/var/lib/kubelet/pki/kubelet.key",
	  "TLSCipherSuites": null,
	  "TLSMinVersion": "",
	  "RotateCertificates": true,
	  "ServerTLSBootstrap": false,
	  "Authentication": {
	    "X509": {
	      "ClientCAFile": "/etc/kubernetes/pki/ca.crt"
	    },
	    "Webhook": {
	      "Enabled": true,
	      "CacheTTL": "2m0s"
	    },
	    "Anonymous": {
	      "Enabled": false
	    }
	  },
	  "Authorization": {
	    "Mode": "Webhook",
	    "Webhook": {
	      "CacheAuthorizedTTL": "5m0s",
	      "CacheUnauthorizedTTL": "30s"
	    }
	  },
	  "RegistryPullQPS": 5,
	  "RegistryBurst": 10,
	  "EventRecordQPS": 50,
	  "EventBurst": 100,
	  "EnableDebuggingHandlers": true,
	  "EnableContentionProfiling": false,
	  "HealthzPort": 10248,
	  "HealthzBindAddress": "127.0.0.1",
	  "OOMScoreAdj": -999,
	  "ClusterDomain": "cluster.local",
	  "ClusterDNS": [
	    "172.17.0.10"
	  ],
	  "StreamingConnectionIdleTimeout": "4h0m0s",
	  "NodeStatusUpdateFrequency": "10s",
	  "NodeStatusReportFrequency": "5m0s",
	  "NodeLeaseDurationSeconds": 40,
	  "ImageMinimumGCAge": "2m0s",
	  "ImageMaximumGCAge": "0s",
	  "ImageGCHighThresholdPercent": 85,
	  "ImageGCLowThresholdPercent": 80,
	  "VolumeStatsAggPeriod": "1m0s",
	  "KubeletCgroups": "",
	  "SystemCgroups": "",
	  "CgroupRoot": "",
	  "CgroupsPerQOS": true,
	  "CgroupDriver": "systemd",
	  "CPUManagerPolicy": "none",
	  "CPUManagerPolicyOptions": null,
	  "CPUManagerReconcilePeriod": "10s",
	  "MemoryManagerPolicy": "None",
	  "TopologyManagerPolicy": "none",
	  "TopologyManagerScope": "container",
	  "TopologyManagerPolicyOptions": null,
	  "QOSReserved": null,
	  "RuntimeRequestTimeout": "2m0s",
	  "HairpinMode": "promiscuous-bridge",
	  "MaxPods": 110,
	  "PodCIDR": "",
	  "PodPidsLimit": -1,
	  "ResolverConfig": "/run/systemd/resolve/resolv.conf",
	  "RunOnce": false,
	  "CPUCFSQuota": true,
	  "CPUCFSQuotaPeriod": "100ms",
	  "MaxOpenFiles": 1000000,
	  "NodeStatusMaxImages": 50,
	  "ContentType": "application/vnd.kubernetes.protobuf",
	  "KubeAPIQPS": 50,
	  "KubeAPIBurst": 100,
	  "SerializeImagePulls": true,
	  "MaxParallelImagePulls": null,
	  "EvictionHard": {
	    "imagefs.available": "15%",
	    "memory.available": "100Mi",
	    "nodefs.available": "10%",
	    "nodefs.inodesFree": "5%"
	  },
	  "EvictionSoft": null,
	  "EvictionSoftGracePeriod": null,
	  "EvictionPressureTransitionPeriod": "5m0s",
	  "EvictionMaxPodGracePeriod": 0,
	  "EvictionMinimumReclaim": null,
	  "PodsPerCore": 0,
	  "EnableControllerAttachDetach": true,
	  "ProtectKernelDefaults": false,
	  "MakeIPTablesUtilChains": true,
	  "IPTablesMasqueradeBit": 14,
	  "IPTablesDropBit": 15,
	  "FeatureGates": null,
	  "FailSwapOn": true,
	  "MemorySwap": {
	    "SwapBehavior": ""
	  },
	  "ContainerLogMaxSize": "10Mi",
	  "ContainerLogMaxFiles": 5,
	  "ConfigMapAndSecretChangeDetectionStrategy": "Watch",
	  "AllowedUnsafeSysctls": null,
	  "KernelMemcgNotification": false,
	  "SystemReserved": null,
	  "KubeReserved": null,
	  "SystemReservedCgroup": "",
	  "KubeReservedCgroup": "",
	  "EnforceNodeAllocatable": [
	    "pods"
	  ],
	  "ReservedSystemCPUs": "",
	  "ShowHiddenMetricsForVersion": "",
	  "Logging": {
	    "format": "text",
	    "flushFrequency": "5s",
	    "verbosity": 6,
	    "options": {
	      "json": {
	        "infoBufferSize": "0"
	      }
	    }
	  },
	  "EnableSystemLogHandler": true,
	  "EnableSystemLogQuery": false,
	  "ShutdownGracePeriod": "1h0m0s",
	  "ShutdownGracePeriodCriticalPods": "30m0s",
	  "ShutdownGracePeriodByPodPriority": null,
	  "ReservedMemory": null,
	  "EnableProfilingHandler": true,
	  "EnableDebugFlagsHandler": true,
	  "SeccompDefault": false,
	  "MemoryThrottlingFactor": 0.9,
	  "RegisterWithTaints": null,
	  "RegisterNode": true,
	  "Tracing": null,
	  "LocalStorageCapacityIsolation": true,
	  "ContainerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
	  "ImageServiceEndpoint": ""
	}
 >

Common Go runtime options and environment variables are also printed, which is a nice touch.

LenkaSeg added a commit to LenkaSeg/cri-o that referenced this issue Feb 14, 2024
Part of the issue cri-o#7704.
Function LogConfig prints current config at startup,
following a kubelet example.

Signed-off-by: Lenka Segura <lsegura@redhat.com>
@LenkaSeg LenkaSeg linked a pull request Feb 14, 2024 that will close this issue
LenkaSeg added a commit to LenkaSeg/cri-o that referenced this issue Feb 16, 2024
Part of the issue cri-o#7704.
Function LogConfig prints current config at startup,
following a kubelet example.

Signed-off-by: Lenka Segura <lsegura@redhat.com>
LenkaSeg added a commit to LenkaSeg/cri-o that referenced this issue Feb 19, 2024
Part of the issue cri-o#7704.
Function LogConfig prints current config at startup,
following a kubelet example.

Signed-off-by: Lenka Segura <lsegura@redhat.com>
LenkaSeg added a commit to LenkaSeg/cri-o that referenced this issue Feb 20, 2024
Part of the issue cri-o#7704.
Function LogConfig prints current config at startup,
following a kubelet example.

Signed-off-by: Lenka Segura <lsegura@redhat.com>
LenkaSeg added a commit to LenkaSeg/cri-o that referenced this issue Mar 1, 2024
Part of the issue cri-o#7704.
Function LogConfig prints current config at startup,
following a kubelet example.

Signed-off-by: Lenka Segura <lsegura@redhat.com>

github-actions bot commented Mar 8, 2024

A friendly reminder that this issue had no activity for 30 days.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 8, 2024
@sohankunkerkar sohankunkerkar removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 3, 2024
@LenkaSeg
Contributor

LenkaSeg commented Apr 5, 2024

The part of this issue that handles exposing the current config and CLI options is in PR #7783.
It exposes the CLI flags and config at startup and on reload.
However, it does not handle the storage part.

Would it be OK to split these two things (exposing and storage) into two separate PRs?
If so, could you please review the above-mentioned PR and let me know whether it is fine as is or needs adjustments?

For the storage part, do we want to persist both the config and the CLI options? Or are there any suggestions on how to go about it?

@kwilczynski @haircommander @sohankunkerkar

@haircommander
Member

I personally don't agree with the need to store the current configuration anywhere. I think printing it is fine. The config is reconstructible from the sources CRI-O is being configured with. The debugging case of printing the config is sufficient, IMO.
