
Upgrade systemd from 255 to 256 #2145

Open · wants to merge 5 commits into base: main

Conversation


@ader1990 ader1990 commented Jul 23, 2024

Upgrade systemd from 255 to 256

Fixes: flatcar/Flatcar#1501

Testing done

[Describe the testing you have done before submitting this PR. Please include both the commands you issued as well as the output you got.]

  • Changelog entries added in the respective changelog/ directory (user-facing change, bug fix, security fix, update)
  • Inspected CI output for image differences: /boot and /usr size, packages, list files for any missing binaries, kernel modules, config files, etc.

Patches required on other subprojects:

@@ -254,14 +254,11 @@ src_prepare() {
"${FILESDIR}/systemd-test-process-util.patch"
# Flatcar: Adding our own patches here.
"${FILESDIR}/0001-wait-online-set-any-by-default.patch"
"${FILESDIR}/0002-networkd-default-to-kernel-IPForwarding-setting.patch"
To be investigated whether the systemd 256 changes make this patch irrelevant: https://github.com/systemd/systemd-stable/blob/v256/src/network/networkd-network.c#L470
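For context (taken from the systemd v256 release notes, not from this PR): networkd in 256 gains native per-link and global forwarding settings, which is the upstream change that may overlap with the downstream patch. A `.network` fragment using the new keys would look like:

```
# systemd 256 .network fragment (illustrative; new IPv4Forwarding=/IPv6Forwarding=
# settings per the v256 release notes — not part of this PR's changes)
[Network]
IPv4Forwarding=yes
IPv6Forwarding=yes
```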

@ader1990

The GitHub Actions runs fail with the VM being dropped to the emergency shell at boot, with the error "systemd: system is tainted: unmerged-bin"; needs investigation.
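For context (an assumption based on the systemd 256 release notes, not on this CI run): the `unmerged-bin` taint is set when `/usr/sbin` has not been merged into `/usr/bin`. A minimal shell sketch of that check, with `check_merged` as an illustrative helper name, not a systemd tool:

```shell
# Hedged sketch of the condition behind the "unmerged-bin" taint:
# systemd 256 expects /usr/sbin to resolve to the same directory as /usr/bin.
check_merged() {
    # usage: check_merged /usr  -> prints "merged" or "unmerged-bin"
    if [ "$(readlink -f "$1/sbin")" = "$(readlink -f "$1/bin")" ]; then
        echo merged
    else
        echo unmerged-bin
    fi
}

check_merged /usr
```

On an image with the merged layout (`/usr/sbin` a symlink to `bin`) this prints `merged`; on the failing boots above it would print `unmerged-bin`.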

github-actions bot commented Jul 23, 2024

Test report for 4134.0.0+nightly-20241025-2100 / amd64 arm64

Platforms tested : qemu_uefi-amd64 qemu_uefi-arm64

ok bpf.execsnoop 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok bpf.local-gadget 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

not ok cl.cgroupv1 ❌ Failed: qemu_uefi-amd64 (1, 2, 3, 4, 5); qemu_uefi-arm64 (1, 2, 3, 4, 5)

                Diagnostic output for qemu_uefi-arm64, run 5
    L1: " Error: _harness.go:632: Cluster failed starting machines: machine __0f0c4257-e58e-4d6f-9bc1-61dfb59360cd__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.2:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 0f0c4257-e58e-4d6f-9bc1-61dfb59360cd console_"
    L3: " "
    L4: " Error: _harness.go:632: Cluster failed starting machines: machine __4f8e9493-cccf-48ce-a75e-ef2b867d06b9__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.3:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 4f8e9493-cccf-48ce-a75e-ef2b867d06b9 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 5
    L1: " Error: _harness.go:632: Cluster failed starting machines: machine __0f0c4257-e58e-4d6f-9bc1-61dfb59360cd__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.2:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 0f0c4257-e58e-4d6f-9bc1-61dfb59360cd console_"
    L3: " "
    L4: " Error: _harness.go:632: Cluster failed starting machines: machine __4f8e9493-cccf-48ce-a75e-ef2b867d06b9__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.3:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 4f8e9493-cccf-48ce-a75e-ef2b867d06b9 console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 4
    L1: " Error: _harness.go:632: Cluster failed starting machines: machine __4eeb1e09-4cb2-4ee4-b711-02e5917a08ad__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.4:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 4eeb1e09-4cb2-4ee4-b711-02e5917a08ad console_"
    L3: " "
    L4: " Error: _harness.go:632: Cluster failed starting machines: machine __49927f0e-0194-4b51-a09a-9b31d82980a1__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.4:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 49927f0e-0194-4b51-a09a-9b31d82980a1 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 4
    L1: " Error: _harness.go:632: Cluster failed starting machines: machine __4eeb1e09-4cb2-4ee4-b711-02e5917a08ad__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.4:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 4eeb1e09-4cb2-4ee4-b711-02e5917a08ad console_"
    L3: " "
    L4: " Error: _harness.go:632: Cluster failed starting machines: machine __49927f0e-0194-4b51-a09a-9b31d82980a1__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.4:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 49927f0e-0194-4b51-a09a-9b31d82980a1 console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 3
    L1: " Error: _harness.go:632: Cluster failed starting machines: machine __9de8a2da-ed4a-4b59-873e-1b104fe92a61__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.4:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 9de8a2da-ed4a-4b59-873e-1b104fe92a61 console_"
    L3: " "
    L4: " Error: _harness.go:632: Cluster failed starting machines: machine __259c8ef2-cde6-4df6-8899-8c0764e0d152__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.4:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 259c8ef2-cde6-4df6-8899-8c0764e0d152 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 3
    L1: " Error: _harness.go:632: Cluster failed starting machines: machine __9de8a2da-ed4a-4b59-873e-1b104fe92a61__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.4:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 9de8a2da-ed4a-4b59-873e-1b104fe92a61 console_"
    L3: " "
    L4: " Error: _harness.go:632: Cluster failed starting machines: machine __259c8ef2-cde6-4df6-8899-8c0764e0d152__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.4:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 259c8ef2-cde6-4df6-8899-8c0764e0d152 console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 2
    L1: " Error: _harness.go:632: Cluster failed starting machines: machine __c157831e-0acb-4f48-8c57-f127d6e21ab3__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.2:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine c157831e-0acb-4f48-8c57-f127d6e21ab3 console_"
    L3: " "
    L4: " Error: _harness.go:632: Cluster failed starting machines: machine __8b0d997f-d55a-4b19-8690-bd9c2ab36ba1__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.5:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 8b0d997f-d55a-4b19-8690-bd9c2ab36ba1 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 2
    L1: " Error: _harness.go:632: Cluster failed starting machines: machine __c157831e-0acb-4f48-8c57-f127d6e21ab3__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.2:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine c157831e-0acb-4f48-8c57-f127d6e21ab3 console_"
    L3: " "
    L4: " Error: _harness.go:632: Cluster failed starting machines: machine __8b0d997f-d55a-4b19-8690-bd9c2ab36ba1__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.5:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 8b0d997f-d55a-4b19-8690-bd9c2ab36ba1 console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 1
    L1: " Error: _cluster.go:82: kolet:"
    L2: "2024-10-29T11:52:25Z kolet: cgroup2 is mounted: {cgroup2 /sys/fs/cgroup cgroup2 [rw seclabel nosuid nodev noexec relatime nsdelegate memory_recursiveprot]}"
    L3: "--- FAIL: cl.cgroupv1/CgroupMounts (0.15s)"
    L4: "cluster.go:85: kolet: Process exited with status 1"
    L5: "harness.go:602: Found systemd froze execution on machine 2cf1f0ae-80ac-4dc2-8ace-3907f4523464 console_"
    L6: " "
    L7: " Error: _harness.go:632: Cluster failed starting machines: machine __a612204a-29ed-44e4-b1a2-6906a82ee6ce__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.114:22: connect: no route to host"
    L8: "harness.go:602: Found systemd froze execution on machine a612204a-29ed-44e4-b1a2-6906a82ee6ce console_"
    L9: " "
                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _cluster.go:82: kolet:"
    L2: "2024-10-29T11:52:25Z kolet: cgroup2 is mounted: {cgroup2 /sys/fs/cgroup cgroup2 [rw seclabel nosuid nodev noexec relatime nsdelegate memory_recursiveprot]}"
    L3: "--- FAIL: cl.cgroupv1/CgroupMounts (0.15s)"
    L4: "cluster.go:85: kolet: Process exited with status 1"
    L5: "harness.go:602: Found systemd froze execution on machine 2cf1f0ae-80ac-4dc2-8ace-3907f4523464 console_"
    L6: " "
    L7: " Error: _harness.go:632: Cluster failed starting machines: machine __a612204a-29ed-44e4-b1a2-6906a82ee6ce__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.114:22: connect: no route to host"
    L8: "harness.go:602: Found systemd froze execution on machine a612204a-29ed-44e4-b1a2-6906a82ee6ce console_"
    L9: " "
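A likely root cause for the cgroupv1 failures above (an assumption based on the systemd v256 NEWS file, not confirmed against this CI run): systemd 256 refuses to boot under the legacy cgroup v1 hierarchy unless it is explicitly forced, so the cgroupv1 tests would need an opt-in on the kernel command line, along the lines of:

```
# Kernel command line opt-in for cgroup v1 under systemd 256
# (per the systemd v256 release notes; shown as context, not a change in this PR)
systemd.unified_cgroup_hierarchy=0 SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1
```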

ok cl.cloudinit.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.multipart-mime 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.script 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.discovery 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.etcdctlv3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.v2-backup-restore 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.filesystem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.flannel.udp 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.flannel.vxlan 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.instantiated.enable-unit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.kargs 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.luks 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.reuse 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _oem.go:199: Couldn_t reboot machine: machine __126b31ad-83ff-4eb4-99fd-6cc1f72bf8dd__ failed basic checks: some systemd units failed:"
    L2: "● ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L3: "status: "
    L4: "journal:-- No entries --_"
    L5: " "
    L6: "  "

ok cl.ignition.oem.wipe 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.partition_on_boot_disk 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.symlink 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.translation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.ext4checkexisting 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.swap 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.vfat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.install.cloudinit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.internet 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.locksmith.cluster 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.misc.falco 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.network.initramfs.second-boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.listeners 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.wireguard 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.omaha.ping 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.osreset.ignition-rerun 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.overlay.cleanup 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.swap_activation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.fallbackdownload # SKIP 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tang.nonroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tang.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.toolbox.dnf-install 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.eventlog 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.nonroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root-cryptenroll 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root-cryptenroll-pcr-noupdate 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.tpm.root-cryptenroll-pcr-withupdate 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.badverity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.grubnop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.update.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.users.shells 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.verity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.auth.verify 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.local 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.s3.versioned 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.security.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.systemd.enable-service 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.boolean 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.enforce 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.tls.fetch-urls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.update.badusr 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.systemd-nspawn 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.btrfs-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.containerd-restart 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.enable-service.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.lib-coreos-dockerd-compat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.network-openbsd-nc 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.selinux 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.userns 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

not ok kubeadm.v1.29.2.calico.cgroupv1.base ❌ Failed: qemu_uefi-amd64 (1, 2, 3, 4, 5); qemu_uefi-arm64 (1, 2, 3, 4, 5)

                Diagnostic output for qemu_uefi-arm64, run 5
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __d95a8cb2-ff91-418b-b33d-6c74e0795e6d__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine d95a8cb2-ff91-418b-b33d-6c74e0795e6d console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __e91b966c-30a6-48a8-8ec9-d3bba4b2d3a0__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine e91b966c-30a6-48a8-8ec9-d3bba4b2d3a0 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 5
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __d95a8cb2-ff91-418b-b33d-6c74e0795e6d__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine d95a8cb2-ff91-418b-b33d-6c74e0795e6d console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __e91b966c-30a6-48a8-8ec9-d3bba4b2d3a0__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine e91b966c-30a6-48a8-8ec9-d3bba4b2d3a0 console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 4
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __a733efd0-6fec-4300-86db-0457b2fc087e__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine a733efd0-6fec-4300-86db-0457b2fc087e console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __509b089b-0e9e-4020-8f74-579ff55f6f9c__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 509b089b-0e9e-4020-8f74-579ff55f6f9c console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 4
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __a733efd0-6fec-4300-86db-0457b2fc087e__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine a733efd0-6fec-4300-86db-0457b2fc087e console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __509b089b-0e9e-4020-8f74-579ff55f6f9c__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 509b089b-0e9e-4020-8f74-579ff55f6f9c console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 3
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __e6ed7f65-6960-45d7-9e0d-48737eee5a72__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine e6ed7f65-6960-45d7-9e0d-48737eee5a72 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __f91593b9-abae-4e7f-b322-00d791786a3e__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine f91593b9-abae-4e7f-b322-00d791786a3e console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 3
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __e6ed7f65-6960-45d7-9e0d-48737eee5a72__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine e6ed7f65-6960-45d7-9e0d-48737eee5a72 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __f91593b9-abae-4e7f-b322-00d791786a3e__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine f91593b9-abae-4e7f-b322-00d791786a3e console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 2
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __1cfb32e1-000d-4b74-b7ce-fbb43ec8fe25__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 1cfb32e1-000d-4b74-b7ce-fbb43ec8fe25 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __9fc7d5f5-fa82-411c-b644-a30ae40aa45f__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 9fc7d5f5-fa82-411c-b644-a30ae40aa45f console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 2
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __1cfb32e1-000d-4b74-b7ce-fbb43ec8fe25__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 1cfb32e1-000d-4b74-b7ce-fbb43ec8fe25 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __9fc7d5f5-fa82-411c-b644-a30ae40aa45f__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 9fc7d5f5-fa82-411c-b644-a30ae40aa45f console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 1
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __a6245d54-ab71-47a5-b5b0-2b21af40465f__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.139:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine a6245d54-ab71-47a5-b5b0-2b21af40465f console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __eb95e477-a037-4fcb-9176-5d0f11039420__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.83:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine eb95e477-a037-4fcb-9176-5d0f11039420 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __a6245d54-ab71-47a5-b5b0-2b21af40465f__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.139:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine a6245d54-ab71-47a5-b5b0-2b21af40465f console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __eb95e477-a037-4fcb-9176-5d0f11039420__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.83:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine eb95e477-a037-4fcb-9176-5d0f11039420 console_"
    L6: " "

ok kubeadm.v1.29.2.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

not ok kubeadm.v1.29.2.cilium.cgroupv1.base ❌ Failed: qemu_uefi-amd64 (1, 2, 3, 4, 5); qemu_uefi-arm64 (1, 2, 3, 4, 5)

                Diagnostic output for qemu_uefi-arm64, run 5
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __7fe16f63-ad78-4b5f-9a42-d4694e45cac2__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 7fe16f63-ad78-4b5f-9a42-d4694e45cac2 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __7cfbd350-a4f0-4deb-92dc-f610bb722aaf__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 7cfbd350-a4f0-4deb-92dc-f610bb722aaf console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 5
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __7fe16f63-ad78-4b5f-9a42-d4694e45cac2__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 7fe16f63-ad78-4b5f-9a42-d4694e45cac2 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __7cfbd350-a4f0-4deb-92dc-f610bb722aaf__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 7cfbd350-a4f0-4deb-92dc-f610bb722aaf console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 4
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __7dbdaaa5-ab61-4189-9414-dc7b66f38fef__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 7dbdaaa5-ab61-4189-9414-dc7b66f38fef console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __e0e4332e-ae94-40d4-a4e2-cad6264a8288__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine e0e4332e-ae94-40d4-a4e2-cad6264a8288 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 4
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __7dbdaaa5-ab61-4189-9414-dc7b66f38fef__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 7dbdaaa5-ab61-4189-9414-dc7b66f38fef console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __e0e4332e-ae94-40d4-a4e2-cad6264a8288__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine e0e4332e-ae94-40d4-a4e2-cad6264a8288 console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 3
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __ee9107ac-695d-4d7f-8a9b-c7bd485aa8be__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine ee9107ac-695d-4d7f-8a9b-c7bd485aa8be console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __754c0254-de3a-450e-92e2-eae73c742ea4__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 754c0254-de3a-450e-92e2-eae73c742ea4 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 3
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __ee9107ac-695d-4d7f-8a9b-c7bd485aa8be__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine ee9107ac-695d-4d7f-8a9b-c7bd485aa8be console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __754c0254-de3a-450e-92e2-eae73c742ea4__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 754c0254-de3a-450e-92e2-eae73c742ea4 console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 2
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __0ef544ff-7492-4e60-a279-0d169ce04f29__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 0ef544ff-7492-4e60-a279-0d169ce04f29 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __1527d4d1-9d96-47d5-9742-cbd5cb29b484__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.8:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 1527d4d1-9d96-47d5-9742-cbd5cb29b484 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 2
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __0ef544ff-7492-4e60-a279-0d169ce04f29__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.7:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 0ef544ff-7492-4e60-a279-0d169ce04f29 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __1527d4d1-9d96-47d5-9742-cbd5cb29b484__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.8:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 1527d4d1-9d96-47d5-9742-cbd5cb29b484 console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 1
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __768ca311-adef-4682-b051-6ec18ab377d5__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.91:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 768ca311-adef-4682-b051-6ec18ab377d5 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __ddcae49d-5229-40fb-9448-f204ad83b574__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.28:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine ddcae49d-5229-40fb-9448-f204ad83b574 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __768ca311-adef-4682-b051-6ec18ab377d5__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.91:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 768ca311-adef-4682-b051-6ec18ab377d5 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __ddcae49d-5229-40fb-9448-f204ad83b574__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.28:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine ddcae49d-5229-40fb-9448-f204ad83b574 console_"
    L6: " "

ok kubeadm.v1.29.2.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

not ok kubeadm.v1.29.2.flannel.cgroupv1.base ❌ Failed: qemu_uefi-amd64 (1, 2, 3, 4, 5); qemu_uefi-arm64 (1, 2, 3, 4, 5)

                Diagnostic output for qemu_uefi-arm64, run 5
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __181b67b6-e106-446b-b75e-bf14a07ffd94__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 181b67b6-e106-446b-b75e-bf14a07ffd94 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __8348caae-4def-403c-8bb1-ddc4bb62d22c__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 8348caae-4def-403c-8bb1-ddc4bb62d22c console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 5
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __181b67b6-e106-446b-b75e-bf14a07ffd94__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 181b67b6-e106-446b-b75e-bf14a07ffd94 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __8348caae-4def-403c-8bb1-ddc4bb62d22c__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 8348caae-4def-403c-8bb1-ddc4bb62d22c console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 4
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __837af3f0-9bca-403c-a273-033eed3f1cc2__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 837af3f0-9bca-403c-a273-033eed3f1cc2 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __24f7d801-0094-4e96-a102-fffaf3bf44d7__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 24f7d801-0094-4e96-a102-fffaf3bf44d7 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 4
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __837af3f0-9bca-403c-a273-033eed3f1cc2__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 837af3f0-9bca-403c-a273-033eed3f1cc2 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __24f7d801-0094-4e96-a102-fffaf3bf44d7__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 24f7d801-0094-4e96-a102-fffaf3bf44d7 console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 3
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __449fba8a-1234-4677-8e9c-286c1fdc4609__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 449fba8a-1234-4677-8e9c-286c1fdc4609 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __122438ae-47ae-41b5-8523-13fdd2f98342__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 122438ae-47ae-41b5-8523-13fdd2f98342 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 3
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __449fba8a-1234-4677-8e9c-286c1fdc4609__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 449fba8a-1234-4677-8e9c-286c1fdc4609 console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __122438ae-47ae-41b5-8523-13fdd2f98342__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 122438ae-47ae-41b5-8523-13fdd2f98342 console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 2
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __d0370eff-ed74-4dd1-988f-139e9fe7ed3e__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.9:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine d0370eff-ed74-4dd1-988f-139e9fe7ed3e console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __6501e058-5035-4a85-acc3-e09f808ef119__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 6501e058-5035-4a85-acc3-e09f808ef119 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 2
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __d0370eff-ed74-4dd1-988f-139e9fe7ed3e__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.9:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine d0370eff-ed74-4dd1-988f-139e9fe7ed3e console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __6501e058-5035-4a85-acc3-e09f808ef119__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.6:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 6501e058-5035-4a85-acc3-e09f808ef119 console_"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 1
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __0acca582-bb8b-41ae-a450-6134d79f010a__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.25:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 0acca582-bb8b-41ae-a450-6134d79f010a console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __36568b82-de3f-430b-892b-bc6c190a1b38__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.52:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 36568b82-de3f-430b-892b-bc6c190a1b38 console_"
    L6: " "
                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __0acca582-bb8b-41ae-a450-6134d79f010a__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.25:22: connect: no route to host"
    L2: "harness.go:602: Found systemd froze execution on machine 0acca582-bb8b-41ae-a450-6134d79f010a console_"
    L3: " "
    L4: " Error: _kubeadm.go:193: unable to setup cluster: unable to create master node with large disk: machine __36568b82-de3f-430b-892b-bc6c190a1b38__ failed to start: ssh journalctl failed: time limit excee?ded: dial tcp 10.0.0.52:22: connect: no route to host"
    L5: "harness.go:602: Found systemd froze execution on machine 36568b82-de3f-430b-892b-bc6c190a1b38 console_"
    L6: " "

ok kubeadm.v1.30.1.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.30.1.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.30.1.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.31.0.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.31.0.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.31.0.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v4 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.ntp 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok misc.fips 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok packages 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-docker.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-oem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-containerd 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.simple 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.user 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysusers.gshadow 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

@ader1990
Contributor Author

ader1990 commented Jul 24, 2024

After some investigation, it seems that ignition-setup.service fails to run because /usr is now read-only.

From https://lists.freedesktop.org/archives/systemd-devel/2024-June/050407.html:

        Service Management:

        * New system manager setting ProtectSystem= has been added. It is
          analogous to the unit setting, but applies to the whole system. It is
          enabled by default in the initrd.

          Note that this means that code executed in the initrd cannot naively
          expect to be able to write to /usr/ during boot. This affects
          dracut <= 101, which wrote "hooks" to /lib/dracut/hooks/. See
          https://github.com/dracut-ng/dracut-ng/commit/a45048b80c27ee5a45a380.

dracut-ng/dracut-ng@a45048b80c27ee5a45a380 -> this commit shows how to fix dracut 100; it is not applicable here, as the dracut used by Flatcar is an older version (053).

But ignition-setup.service fails at this line, as /usr is mounted read-only: https://github.com/flatcar/bootengine/blob/flatcar-master/dracut/30ignition/ignition-setup.sh#L15.

@ader1990
Contributor Author

One option is, obviously, to disable the default ProtectSystem=, since the Flatcar initrd workflow relies on writing to the rootfs. This could be done in bootengine using a dracut module definition, similar to: https://github.com/flatcar/bootengine/blob/flatcar-master/dracut/99switch-root/nocgroup.conf

Another would be to fix all the bootengine /usr writes and maybe move those to /etc or /var.
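For the first option, a minimal sketch of what such a drop-in could look like (the file name and install path are assumptions; ProtectSystem= is read from the [Manager] section of system.conf and its drop-ins):

```ini
# Hypothetical drop-in shipped into the initrd by a bootengine dracut
# module, e.g. as /etc/systemd/system.conf.d/10-no-protectsystem.conf
[Manager]
# Revert the new systemd 256 initrd default so early boot can write to /usr
ProtectSystem=no
```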

@ader1990 ader1990 force-pushed the ader1990/systemd-major-version-upgrade-256 branch from 9d88f29 to a3f885c Compare July 25, 2024 07:43
@chewi
Contributor

chewi commented Aug 13, 2024

Independently of this, I have tried to update Dracut to 060, and am having trouble with cyclic boot dependencies. I wonder if this is somehow related to the above. Here's what it looks like with the verity stuff disabled, otherwise it's a bit more complicated.

Screenshot_20240813_120607

@ader1990
Contributor Author

Independently of this, I have tried to update Dracut to 060, and am having trouble with cyclic boot dependencies. I wonder if this is somehow related to the above. Here's what it looks like with the verity stuff disabled, otherwise it's a bit more complicated.

Screenshot_20240813_120607

Last time I tried a few months ago, I also got the same cyclic dependencies and gave up. Our bootengine heavily modifies the upstream dracut logic, so things need to be modified (again) there to make the dracut upgrade possible.

@ader1990
Contributor Author

Full error for the initrd break point:

journalctl -xeu ignition-setup.service

Sep 25 08:17:15 localhost systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 25 08:17:15 localhost ignition-setup[891]: cp: cannot create regular file '/bin/is-live-image': Read-only file system
Sep 25 08:17:15 localhost systemd[1]: ignition-setup.service: Main process exited, code=exited, status=1/FAILURE
Sep 25 08:17:15 localhost systemd[1]: ignition-setup.service: Failed with result 'exit-code'.
Sep 25 08:17:15 localhost systemd[1]: Failed to start ignition-setup.service - Ignition (setup).
Sep 25 08:17:15 localhost systemd[1]: ignition-setup.service: Triggering OnFailure= dependencies.

@ader1990
Contributor Author

To overcome the current limitation imposed by systemd 256 (ignition-setup[891]: cp: cannot create regular file '/bin/is-live-image': Read-only file system), there are two options I can think of:

  1. Use /mnt/oem or even /tmp as a temporary place to store is-live-image, but this needs a $PATH change so that it can be found by Ignition -> https://github.com/search?q=repo%3Acoreos%2Fignition%20is-live-image&type=code
  2. Temporarily remount /usr read-write with mount -o remount,rw /usr, make the required change, and then remount it read-only with mount -o remount,ro /usr
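A rough sketch of option 2, assuming it runs in the initrd before Ignition (the source and destination paths would be whatever ignition-setup.sh uses; the helper name here is hypothetical):

```shell
#!/bin/sh
# Sketch: flip /usr to read-write just long enough to copy a file,
# then flip it back. Returns the cp exit status; remounts read-only
# even when the copy fails.
copy_with_rw_usr() {
    src="$1" dst="$2"
    mount -o remount,rw /usr || return 1
    cp "$src" "$dst"; rc=$?
    mount -o remount,ro /usr
    return $rc
}
```

This keeps the image layout unchanged, but it briefly defeats the protection that the new ProtectSystem= default is meant to provide.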

I am not convinced either of these two ideas is the best; maybe there is another option?

Thanks.

@ader1990 ader1990 self-assigned this Sep 25, 2024
@ader1990 ader1990 requested a review from a team September 25, 2024 08:36
@ader1990
Contributor Author

ader1990 commented Sep 25, 2024

With the is-live-image issue fixed, systemd 256 expects dracut 058 -> see systemd/systemd@1c585a4

Until dracut is updated, we need to revert this commit manually in systemd: systemd/systemd@1c585a4
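In ebuild terms that would be one more entry next to the existing Flatcar patches in src_prepare() (the patch file name here is hypothetical):

```sh
# In the systemd-256 ebuild, alongside the other ${FILESDIR} patches:
PATCHES+=(
    # Revert systemd/systemd@1c585a4 until Flatcar's dracut catches up
    "${FILESDIR}/0003-Revert-units-require-dracut-058.patch"
)
```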

@chewi
Contributor

chewi commented Sep 25, 2024

Updating Dracut has proven tricky, mainly due to size issues, so I wouldn't wait for that.

@ader1990
Contributor Author

Updating Dracut has proven tricky, mainly due to size issues, so I wouldn't wait for that.

Yeah, will add the patch to the systemd ebuild.

@ader1990
Contributor Author

ader1990 commented Sep 26, 2024

AMD64 Flatcar running with systemd 256 and linux 6.10:

root@localhost ~ # cat /etc/os-release

NAME="Flatcar Container Linux by Kinvolk"
ID=flatcar
ID_LIKE=coreos
VERSION=4102.0.0+nightly-20240923-2100-26-g730775213c
VERSION_ID=4102.0.0
BUILD_ID=nightly-20240923-2100-26-g730775213c
SYSEXT_LEVEL=1.0
PRETTY_NAME="Flatcar Container Linux by Kinvolk 4102.0.0+nightly-20240923-2100-26-g730775213c (Oklo)"
ANSI_COLOR="38;5;75"
HOME_URL="https://flatcar.org/"
BUG_REPORT_URL="https://issues.flatcar.org"
FLATCAR_BOARD="amd64-usr"
CPE_NAME="cpe:2.3:o:flatcar-linux:flatcar_linux:4102.0.0+nightly-20240923-2100-26-g730775213c:*:*:*:*:*:*:*"

root@localhost ~ # uname -a
Linux localhost 6.10.9-flatcar #1 SMP PREEMPT_DYNAMIC Thu Sep 26 06:21:38 -00 2024 x86_64 Intel(R) Xeon(R) Gold 6134 CPU @ 3.20GHz GenuineIntel GNU/Linux

root@localhost ~ # systemctl --version
systemd 256 (256.2)
+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE

root@localhost ~ # df /boot
Filesystem     1K-blocks  Used Available Use% Mounted on
/dev/vda1         129039 62922     66118  49% /boot

@ader1990 ader1990 force-pushed the ader1990/systemd-major-version-upgrade-256 branch from a3f885c to af427b9 Compare September 26, 2024 07:33
@ader1990 ader1990 marked this pull request as ready for review September 26, 2024 07:34
@ader1990
Contributor Author

The GitHub Actions runs failed because the Mantle tests use cgroup v1. See https://github.com/systemd/systemd/releases/tag/v256-rc3 -> systemd will refuse to boot under cgroup v1 in normal circumstances.

Support for cgroup v1 ('legacy' and 'hybrid' hierarchies) is now
      considered obsolete and systemd by default will refuse to boot under
      it. To forcibly reenable cgroup v1 support,
      SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1 must be set on kernel command
      line. The meson option 'default-hierarchy=' is also deprecated, i.e.
      only cgroup v2 ('unified' hierarchy) can be selected as build-time
      default.
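For reference, on Flatcar such kernel command-line flags are typically set via the OEM GRUB config; a sketch (path and the extra cgroup v1 arguments follow the existing docs, so treat them as assumptions):

```sh
# Sketch: /usr/share/oem/grub.cfg on a Flatcar node still needing cgroup v1
set linux_append="$linux_append systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1"
```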

@jepio
Member

jepio commented Sep 27, 2024 via email

@jepio
Member

jepio commented Sep 27, 2024 via email

@ader1990
Contributor Author

ader1990 commented Sep 30, 2024

Or is it worth still letting users stay on cgroups v1 if they set this flag? We would still want to validate before committing the update.

> We should add code to our update postinstall hook to detect if the user is still on cgroups v1 and abort the update.

Hello @jepio, indeed, we have just a few paths forward:

  1. Follow the systemd approach, no questions asked, regarding cgroup v1 and block the Flatcar upgrade in the postinstall hook.
    Document the transition and disseminate the information about it on all possible channels.
    Document the usage of SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1 with Ignition / manual update for new/old instances.
  2. Set SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1 on all images (not the best option); this flag might also be removed in subsequent versions.
  3. Patch systemd to remove the check (even less desirable).
  4. Block the systemd upgrade until another option appears. Meanwhile, announce this future upgrade path and wait for more feedback from the community.

Thanks.

@dongsupark
Member

we have just a few paths forward:

1. follow systemd approach no questions asked related to the cgroupv1 and block the Flatcar upgrade in the postinstall hook.
   Document the transition and disseminate the information on all possible channels about it.
   Document the usage of SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1 with ignition / manual update for the new/old instances.

2. set SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1 on all images (not the best), this flag might be also removed in subsequent versions

3. patch systemd to remove the check (even less)

4. block the systemd upgrade until another option appears. Meanwhile, announce this future upgrade path and wait for more feedback from the community.

I am for 1: deprecate cgroup v1 and document as much as possible.
The other options look like nothing more than delaying the issue.
Even if we go for 1, users still have a workaround to revert the behavior.

@sayanchowdhury
Member

  1. follow systemd approach no questions asked related to the cgroupv1 and block the Flatcar upgrade in the postinstall hook.

I agree with Dongsu on taking approach 1, and on adding it to the notes section of the release notes and into the documentation.

@krnowak
Member

krnowak commented Oct 9, 2024

During the office hours there was an idea of adding the kernel parameter to the nodes that were still using cgroupv1. This probably would be a smooth way of updating Flatcar to a version with systemd 256.

Of course, with cgroupv1 being on the chopping block, we would need to define how long we are going to support it. My proposal would be to spin up a new LTS version (which is due anyway) and announce that support for cgroupv1 in Flatcar will be bound to the lifetime of the new LTS. For the other channels we could say support lasts until the end of this year or so. People still needing cgroupv1 after that could be pointed to the LTS, so they would have a bit more time to move off to cgroupv2 (until mid-2026, I think).

@t-lo
Member

t-lo commented Oct 10, 2024

Thank you @krnowak . Summing things up:

  • LTS 2024 will remain on 255 and continue to provide cgroups-v1.
  • All releases with 256 included will not support v1 anymore.
    • Add a check to update_engine's post-install script and refuse to update (fail the update) on instances that use cgroups-v1. Bump the update engine ebuild to include the new UE version and add it to this systemd PR.
    • Fail deployments that try to enable cgroups-v1 via the command line (as per docs)
    • Add tests for the two bullet points above
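The postinstall check could look roughly like this (a sketch, not the actual update_engine hook; checking the filesystem type mounted on /sys/fs/cgroup is one common way to detect the hierarchy):

```shell
#!/bin/sh
# Sketch: refuse the update when the running system is not on the
# unified (cgroup v2) hierarchy. A cgroup2fs mount on /sys/fs/cgroup
# means pure cgroup v2; tmpfs there indicates the legacy/hybrid layout.
cgroup_is_v2() {
    [ "$1" = "cgroup2fs" ]
}

fstype=$(stat -f -c %T /sys/fs/cgroup 2>/dev/null || echo unknown)
if cgroup_is_v2 "$fstype"; then
    echo "cgroup v2 detected, update may proceed"
else
    echo "cgroup v1/hybrid detected, refusing to update to systemd 256" >&2
    # a real hook would 'exit 1' here to fail the update
fi
```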

@ader1990 ader1990 force-pushed the ader1990/systemd-major-version-upgrade-256 branch from af427b9 to bf245d2 Compare October 23, 2024 14:19
@ader1990 ader1990 force-pushed the ader1990/systemd-major-version-upgrade-256 branch from bf245d2 to feb5d5f Compare October 23, 2024 14:37
@ader1990 ader1990 force-pushed the ader1990/systemd-major-version-upgrade-256 branch from feb5d5f to 5bac742 Compare October 24, 2024 10:58
ader1990 added a commit to flatcar/mantle that referenced this pull request Oct 24, 2024
@ader1990
Contributor Author

@ader1990
Contributor Author

Mantle tests cannot be fixed via an Ignition setting of the Linux kernel cmdline param SYSTEMD_CGROUP_ENABLE_LEGACY_FORCE=1, as Ignition starts after systemd (of course). So the cgroup v1 Mantle tests can only be done by changing the image before booting (hard) or by using a specialized image (a no-go).

At this moment, I recommend just skipping those cgroup v1 Mantle tests on versions higher than this one.

@ader1990 ader1990 force-pushed the ader1990/systemd-major-version-upgrade-256 branch from bc6a2a8 to 4aa515f Compare October 28, 2024 15:38
Successfully merging this pull request may close these issues.

[RFE] Upgrade systemd 255 to systemd 256
7 participants