bzImages are a common format for Linux kernels on the x86_64 platform, and they are readily available in pre-compiled form from Linux distributions. If CHV could boot bzImages directly, it would reduce the friction for users onboarding onto CHV.
The underlying assumption here is that for some users it is not feasible to use a specific Linux version or configuration, or that they need to build Linux kernels that work for a variety of use cases. It is also undesirable for users to build a disk image just to boot a Linux kernel/initrd pair they already have in their hands. ("This was easy in QEMU and now it's hard.")
Benefits of bzImage Support
Most (pretty much all?) x86_64 Linux kernels out there will just work with Cloud Hypervisor. Users can follow the usual Linux documentation, wikis, and Stack Overflow answers when building a kernel and have a very high likelihood of ending up with something they can use with CHV.
Implementation Options
CHV Native (Preferred)
We can utilize the linux-loader crate (whose bzImage feature flag is already enabled) to load the kernel directly in CHV. This is similar in complexity to the PVH loading path and can share some code with it.
For the user this means that --kernel just accepts PVH or bzImage formats transparently and no firmware is required.
This would be my preferred option, because in my opinion it has the lowest complexity both for users and developers.
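For illustration, a loader could distinguish the two formats by sniffing the image header. The magic values below come from the x86 Linux boot protocol (boot sector signature 0xAA55 at offset 0x1FE, "HdrS" header magic at offset 0x202); the function itself is a hypothetical sketch, not CHV code:

```rust
use std::io::{Cursor, Read, Seek, SeekFrom};

/// Hypothetical format sniffer: returns true if the image carries the
/// x86 Linux boot protocol magic values that identify a bzImage.
/// - offset 0x1FE: boot sector signature 0xAA55 (little-endian)
/// - offset 0x202: "HdrS" setup header magic
fn is_bzimage<R: Read + Seek>(image: &mut R) -> std::io::Result<bool> {
    let mut sig = [0u8; 2];
    image.seek(SeekFrom::Start(0x1FE))?;
    image.read_exact(&mut sig)?;

    let mut hdrs = [0u8; 4];
    image.seek(SeekFrom::Start(0x202))?;
    image.read_exact(&mut hdrs)?;

    Ok(u16::from_le_bytes(sig) == 0xAA55 && &hdrs == b"HdrS")
}

fn main() -> std::io::Result<()> {
    // Build a fake header that only carries the magic values.
    let mut buf = vec![0u8; 0x300];
    buf[0x1FE] = 0x55;
    buf[0x1FF] = 0xAA;
    buf[0x202..0x206].copy_from_slice(b"HdrS");

    println!("bzImage: {}", is_bzimage(&mut Cursor::new(buf))?);
    Ok(())
}
```

If the magic values are absent, CHV would fall back to the existing PVH/ELF path, so `--kernel` stays a single flag for both formats.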
Via rust-hypervisor-firmware
We can extend rust-hypervisor-firmware to fetch kernels from CHV. In QEMU, this is accomplished via fw-cfg.
For the user this means that --firmware firmware.img --kernel bzImage would just work.
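For context, fw-cfg is a simple selector/data interface: the guest writes a 16-bit item key to the selector port (0x510 on x86) and then streams that item's bytes from the data port (0x511). The toy in-memory model below illustrates the protocol CHV would need to emulate; the struct and item contents are illustrative, not CHV or QEMU code, though the key constants match QEMU's fw-cfg interface:

```rust
use std::collections::HashMap;

// Well-known fw-cfg item keys (from QEMU's fw-cfg interface).
const FW_CFG_SIGNATURE: u16 = 0x0000; // reads back "QEMU"
const FW_CFG_KERNEL_SIZE: u16 = 0x0008;
const FW_CFG_KERNEL_DATA: u16 = 0x0011;

/// Toy in-memory stand-in for the fw-cfg device: writing the selector
/// picks an item and resets the read cursor; subsequent data-port
/// reads stream that item's bytes.
struct FwCfg {
    items: HashMap<u16, Vec<u8>>,
    selected: u16,
    pos: usize,
}

impl FwCfg {
    fn new(kernel: &[u8]) -> Self {
        let mut items = HashMap::new();
        items.insert(FW_CFG_SIGNATURE, b"QEMU".to_vec());
        items.insert(
            FW_CFG_KERNEL_SIZE,
            (kernel.len() as u32).to_le_bytes().to_vec(),
        );
        items.insert(FW_CFG_KERNEL_DATA, kernel.to_vec());
        FwCfg { items, selected: 0, pos: 0 }
    }

    /// Guest writes a key to the selector port (0x510 on x86).
    fn select(&mut self, key: u16) {
        self.selected = key;
        self.pos = 0;
    }

    /// Guest reads one byte from the data port (0x511 on x86).
    fn read_data(&mut self) -> u8 {
        let b = self
            .items
            .get(&self.selected)
            .and_then(|item| item.get(self.pos).copied())
            .unwrap_or(0);
        self.pos += 1;
        b
    }
}

fn main() {
    let kernel = b"fake bzImage bytes";
    let mut fw_cfg = FwCfg::new(kernel);

    // Firmware-side flow: read the size, then stream the kernel.
    fw_cfg.select(FW_CFG_KERNEL_SIZE);
    let size = u32::from_le_bytes([
        fw_cfg.read_data(),
        fw_cfg.read_data(),
        fw_cfg.read_data(),
        fw_cfg.read_data(),
    ]);

    fw_cfg.select(FW_CFG_KERNEL_DATA);
    let fetched: Vec<u8> = (0..size).map(|_| fw_cfg.read_data()).collect();
    assert_eq!(fetched, kernel);
    println!("fetched {} kernel bytes", size);
}
```

rust-hypervisor-firmware would play the firmware side of this exchange, leaving CHV to expose the kernel the user passed via `--kernel`.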
Via coreboot + coreboot-linux-loader payload
This is a bit more out there and requires more fw-cfg and other work, but it has some really nice advantages. coreboot takes care of a lot of functionality that is currently implemented in CHV:
- PCI BAR MMIO region allocation
- various tables that the OS needs (ACPI, MPTable, SMBIOS, etc.), if you want them
- pluggable support for different bootloaders depending on the situation (from full UEFI to something minimal)
In effect this moves code from CHV, where it is security critical, into the VM, where it is not. (Think of an exploitable bug in bzImage loading or similar.)
Having fw-cfg would also allow moving MPTable and SMBIOS code into rust-hypervisor-firmware, where it can do less harm.
For the user this is identical to the previous option: --firmware firmware.img --kernel bzImage would just work (with maybe a different firmware).
This is the most involved option, but has the biggest impact long-term and also paves the way to proper Windows support.
Alternatives to this Proposal
Instead of all of this we can also write clear documentation that gets people started when the assumptions I stated at the beginning apply to them.
See also #5752 for more context.