Nested virtualization not working with VFIO devices. #6110
Comments
Hmm, it looks like this should be supported. This PR added support back in 2019. I don't know if there are any integration tests; this could have regressed if it is not frequently used.

After a little additional research, I found something interesting. cloud-hypervisor L2 guests work inside cloud-hypervisor L1 guests... but qemu L2 guests don't work inside cloud-hypervisor L1 guests.

@thomasbarrett to confirm -

PR to resolve this here: #6297
Add infrastructure to look up the host address for MMIO regions on external DMA mapping requests. This specifically resolves VFIO passthrough for virtio-iommu, allowing nested virtualization to pass external devices through.

Fixes cloud-hypervisor#6110

Signed-off-by: Andrew Carp <acarp@crusoeenergy.com>
To follow up for clarity, the actual behavior of this bug was:

- cloud-hypervisor L1 + cloud-hypervisor L2 (VFIO) -> fails

Both CH and qemu worked when no external devices were passed through over VFIO. However, when devices were passed through over VFIO, the L1 hypervisor would crash. This turned out to be a bug in cloud-hypervisor's implementation of external DMA, where it was unable to map MMIO regions.
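The fix described in the PR can be sketched roughly as follows: when an external DMA map request arrives for a guest address that is not backed by guest RAM, fall back to a lookup over the MMIO regions (e.g. the BARs of a VFIO device mapped into the L1 guest) to find the corresponding host address. All names and types below are illustrative, not cloud-hypervisor's actual API.

```rust
/// Illustrative error type; the real code returns a richer error.
#[derive(Debug, PartialEq)]
enum MapError {
    NoBackingRegion(u64),
}

/// A contiguous guest-physical range backed by host memory.
struct Region {
    guest_base: u64,
    len: u64,
    host_base: u64,
}

/// Hypothetical view of the guest address space: RAM regions plus
/// MMIO regions (such as passed-through device BARs).
struct AddressSpace {
    ram: Vec<Region>,
    mmio: Vec<Region>,
}

impl AddressSpace {
    fn lookup(regions: &[Region], gpa: u64) -> Option<u64> {
        regions
            .iter()
            .find(|r| gpa >= r.guest_base && gpa < r.guest_base + r.len)
            .map(|r| r.host_base + (gpa - r.guest_base))
    }

    /// Resolve a guest physical address to a host address, consulting
    /// RAM first and MMIO regions second. The missing MMIO fallback is
    /// what made DMA map requests for BAR addresses fail in this issue.
    fn host_addr(&self, gpa: u64) -> Result<u64, MapError> {
        Self::lookup(&self.ram, gpa)
            .or_else(|| Self::lookup(&self.mmio, gpa))
            .ok_or(MapError::NoBackingRegion(gpa))
    }
}
```

With only the RAM lookup, a map request for a BAR address like `0xe000180000` (from the log below) returns an error and the mapping fails; with the MMIO fallback, it resolves to the host mapping of the BAR.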
Do we have a test that exercises this - should we have one?
Describe the bug
Nested virtualization does not seem to be working correctly with VFIO devices. When creating an L1 cloud-hypervisor VM with a VFIO passthrough device and an L2 qemu VM with a VFIO passthrough device, the L1 VM exited with an error originating in the VfioDmaMapping::map function. Specifically, the problem seems to be with mapping the VFIO BARs into the L2 VM memory space. See the error log below.
To Reproduce
Start the L1 virtual machine.
Start the L2 virtual machine.
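A sketch of the two steps above, assuming a VFIO device attached to the L1 guest behind virtio-iommu. All paths, image names, and PCI addresses are placeholders for your environment; exact flags may differ by version.

```shell
# On the host: launch the L1 cloud-hypervisor guest with a VFIO
# passthrough device behind virtio-iommu (iommu=on).
cloud-hypervisor \
    --kernel vmlinux \
    --disk path=ubuntu-22.04.raw \
    --cpus boot=4 \
    --memory size=8G \
    --cmdline "root=/dev/vda1 console=hvc0" \
    --device path=/sys/bus/pci/devices/0000:01:00.0/,iommu=on

# Inside the L1 guest: bind the passed-through device to vfio-pci,
# then launch the L2 qemu guest with it (placeholder PCI address).
qemu-system-x86_64 -enable-kvm -m 4G \
    -drive file=ubuntu-22.04.qcow2,format=qcow2 \
    -device vfio-pci,host=00:04.0
```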
Guest OS version details:
Ubuntu 22.04
Host OS version details:
Ubuntu 22.04
Logs
Note that 0xe000180000 is the address of the PCI BAR of the VFIO passthrough device in the L1 guest.

Linux kernel output: