Motivation
propolis currently only runs on Linux (KVM) and macOS (Hypervisor.framework) via libkrun. There is no Windows support. This issue tracks adding a Hyper-V backend to enable running OCI container images as microVMs on Windows.
Depends on: #1 (hypervisor abstraction layer)
Background
Why Hyper-V?
- Native Windows hypervisor — ships with Windows 10/11 Pro and Server editions
- HCS (Host Compute Service) — well-maintained Go bindings via hcsshim, used by Docker Desktop and containerd
- LCOW support — HCS natively supports Linux Containers on Windows (running Linux guests on Hyper-V)
- No CGO required — hcsshim is pure Go, unlike the libkrun backend
Alternatives considered
| Alternative | Why not |
|---|---|
| Hyperlight | No OS in guest, no filesystem, no networking, no OCI support, no Go SDK. Fundamentally different execution model (function sandbox, not container VM). |
| QEMU + WHPX | Too heavyweight. Full device emulation stack adds significant complexity and binary size. |
| WSL2 + KVM | KVM support in WSL2 is unreliable — works in some WSL versions, regressed in others. Not shippable. |
| Firecracker | Linux/KVM only, no Windows support. |
Proposed Implementation
Backend: hypervisor/hyperv/
Implements the hypervisor.Backend interface from #1:
```go
//go:build windows

type Backend struct {
	// KernelPath is the path to the Linux kernel to boot.
	KernelPath string
	// InitRDPath is the optional path to an initrd image.
	InitRDPath string
}

func (b *Backend) Name() string { return "hyperv" }

func (b *Backend) PrepareRootFS(ctx context.Context, rootfsPath string, initCfg InitConfig) (string, error) {
	// 1. Write /.propolis_init.json to rootfs (generic init config)
	// 2. Convert flat rootfs directory to VHDX disk image
	// 3. Return path to VHDX
}

func (b *Backend) Start(ctx context.Context, cfg VMConfig) (VMHandle, error) {
	// 1. Create LCOW utility VM via hcsshim
	// 2. Attach root VHDX
	// 3. Configure networking (hvsock transport)
	// 4. Add filesystem mounts (Plan 9 shares)
	// 5. Start VM
	// 6. Return hcsHandle wrapping uvm.UtilityVM
}
```

Key components
Rootfs → VHDX conversion
The OCI image extraction pipeline produces a flat directory. Hyper-V needs a VHDX disk image. Options to investigate:
- virtdisk.dll Windows API (CreateVirtualDisk, AttachVirtualDisk)
- hcsshim's internal LCOW layer utilities
- diskpart / Convert-VHD via PowerShell (fragile, last resort)
Networking: gvisor-tap-vsock over hvsock
gvisor-tap-vsock already supports Hyper-V socket transport (AF_HYPERV). The same virtual network stack used by libkrun's in-process networking can be reused with a different transport:
- libkrun: AcceptQemu() over a Unix socketpair
- Hyper-V: AcceptHyperV() over hvsock
The NetEndpoint type from #1 carries the transport info:

```go
NetEndpoint{Type: NetEndpointHVSocket, Path: serviceGUID}
```

Guest init
libkrun reads /.krun_config.json via its built-in init. Hyper-V needs a different mechanism:
- Write /.propolis_init.json with the same schema ({cmd, env, working_dir})
- Extend the guest/boot package to read this config and exec the workload
- Or use a minimal custom init binary in the initrd
Linux kernel
Unlike libkrun (which bundles a kernel in libkrunfw), Hyper-V requires a separate kernel image. Options:
- Bundle a kernel build (like Docker Desktop uses LinuxKit)
- Build a custom minimal kernel with virtio + 9p + ext4
- Let users provide their own via Backend.KernelPath
Platform-specific files needed
| File | Purpose |
|---|---|
| hypervisor/hyperv/backend.go | Backend implementation via hcsshim |
| hypervisor/hyperv/handle.go | VMHandle wrapping uvm.UtilityVM |
| hypervisor/hyperv/vhd.go | Rootfs dir → VHDX conversion |
| hypervisor/default_windows.go | DefaultBackendName = "hyperv" |
| preflight/hyperv_windows.go | Check Hyper-V is enabled |
| preflight/resources_windows.go | Disk/CPU/RAM checks |
| internal/procutil/process_windows.go | Process identification via QueryFullProcessImageName |
| extract/libname_windows.go | .dll naming convention |
VM lifecycle
Unlike libkrun (where krun_start_enter() takes over the process and never returns), Hyper-V manages VM lifecycle via API calls:
```go
type hcsHandle struct {
	vm *uvm.UtilityVM
}

func (h *hcsHandle) Stop(ctx context.Context) error {
	// Graceful: vm.Shutdown()
	// Force: vm.Terminate()
	// Cleanup: vm.Close()
}

func (h *hcsHandle) IsAlive() bool { return h.vm.State() == StateRunning }

func (h *hcsHandle) ID() string { return h.vm.ID() }
```

No two-process model needed. No signals. No Setsid. Pure API-driven lifecycle.
Open Questions
- Kernel bundling: LinuxKit kernel? Custom minimal build? User-provided? What's the right default for a library?
- VHD creation: Which Windows API is most reliable for creating ext4-formatted VHDX from a directory tree?
- Guest init: Extend the guest/boot package, or build a separate init binary for Hyper-V guests?
- CI/CD: How to test? Windows runners with nested virtualization? Mock HCS interfaces? Integration test matrix?
- hvsock vs named pipe: Which transport for gvisor-tap-vsock? hvsock is more natural for Hyper-V, named pipes are simpler.
- Hyper-V editions: Require Pro/Enterprise (full Hyper-V) or also support Home (limited WHP)?
- Filesystem mounts: Plan 9 shares via hcsshim, or alternative mechanism?
Implementation phases
1. Abstraction layer — #1 (Introduce hypervisor.Backend abstraction layer) must land first
2. Windows platform files — preflight, procutil, extract, default backend (can be done in parallel with 3)
3. Hyper-V backend — hcsshim integration, VHD creation, hvsock networking
4. Guest init — extend guest/boot or build custom init for non-libkrun backends
5. Kernel packaging — decide on and implement kernel bundling strategy
6. CI — Windows test infrastructure