virtio-device: Mapping mmio over virtio-iommu #6297
Conversation
(force-pushed 49d5bba → 5622317)
@acarp-crusoe If a commit fixes a bug, could you please put "Fixes: #" in the commit message itself? If it's just relevant, use "See: #".
(force-pushed 5622317 → 19dfd6e)
The commit message has been updated to include the fixed bug - thanks.
(force-pushed 19dfd6e → fe782d7)
From what I can see, the existing workflow also appears to be failing on the main branch, so there's not much I can do to address that. Everything else is ready to go, though.
@thomasbarrett would you like to review?
@acarp-crusoe This PR also has some extra whitespace changes - do you mind taking a look?
(force-pushed 59afa8c → 87a63fb)
Sorry about that, the whitespace changes should now be reverted.
It seems like we are finding ourselves needing to re-implement a lot of the functionality from the …
Sorry for the slow review @rbradford. @acarp-crusoe and I were at a conference :)
(force-pushed 07642e7 → 4dac0e0)
(force-pushed 4dac0e0 → c07eeb9)
Thank you for the contribution. I'd suggest splitting this change into two commits. The first commit is the refactoring (i.e. no functional change) of moving `VfioDmaMapping` to the `pci` crate, and the second commit (the actual functional change) extends the `VfioDmaMapping` struct to support looking up the user address of mmio regions. Separating functional changes from refactoring is generally cleaner and can help a lot with code review.

Also, I wonder why a Cloud Hypervisor L2 VM works while a QEMU L2 VM does not. Pardon me, I have limited understanding of VFIO and vMMIO.
(force-pushed 9c1d8fd → ec4b9fe)
(force-pushed ec4b9fe → 86033ce)
Sounds good, the change has been split into two commits (one the refactor and the other the functional change); just let me know if you need anything else.

To answer your question - both Cloud Hypervisor and QEMU worked as L2 VMs within Cloud Hypervisor. However, if you tried to include any external PCI devices as part of the L2 VM, that wouldn't work. The issue lay in the fact that the mapping requests arriving at the virtio-iommu device were never checked against mmio regions within `VfioDmaMapping`. Originally, `VfioDmaMapping` only attempted to DMA map RAM using the `guest_memory` mmap stored in the `memory_manager`, but mmio regions are not included in that mmap. This PR adds `mmio_regions` as a separate object in the `device_manager` so that `VfioDmaMapping` can check both guest memory (RAM) and mmio. With this change we can now map mmio over virtio-iommu properly, whereas before the L1 VM would crash because it received a mapping request for a range of mmio memory it couldn't locate.
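To illustrate the lookup order described above, here is a minimal, self-contained Rust sketch; the `MmioRegion` fields, the `gpa_to_hva` helper, and the closure-based RAM lookup are hypothetical simplifications for illustration, not the actual cloud-hypervisor types:

```rust
/// Hypothetical, simplified stand-in for a device BAR mapping; the real
/// cloud-hypervisor MmioRegion type carries more state than this.
struct MmioRegion {
    start: u64,     // guest-physical start of the region
    length: u64,    // region size in bytes
    host_addr: u64, // host virtual address backing the region
}

/// Resolve a guest-physical address for a DMA mapping request: try guest
/// RAM first (the original behavior), then fall back to mmio regions
/// (the fallback this PR adds).
fn gpa_to_hva(
    gpa: u64,
    ram_lookup: impl Fn(u64) -> Option<u64>, // e.g. a GuestMemory GPA->HVA query
    mmio_regions: &[MmioRegion],
) -> Option<u64> {
    if let Some(hva) = ram_lookup(gpa) {
        return Some(hva); // backed by the guest_memory mmap
    }
    // Not in RAM: check whether the address falls inside a device BAR.
    mmio_regions.iter().find_map(|r| {
        if gpa >= r.start && gpa < r.start + r.length {
            Some(r.host_addr + (gpa - r.start))
        } else {
            None
        }
    })
}

fn main() {
    let regions = [MmioRegion {
        start: 0xe000_0000,
        length: 0x1000,
        host_addr: 0x7f00_0000_0000,
    }];
    // A RAM lookup that never matches, so the mmio fallback is exercised.
    let hva = gpa_to_hva(0xe000_0010, |_| None, &regions);
    assert_eq!(hva, Some(0x7f00_0000_0010));
}
```

The resolved host virtual address is what a VFIO container ultimately needs: the `VFIO_IOMMU_MAP_DMA` ioctl is programmed with a host vaddr, an iova, and a size, so a GPA that can't be resolved (previously, any mmio address) makes the mapping request fail.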
The code changes make sense and look good to me. As I am not a VFIO expert, another review would be very helpful. @rbradford @thomasbarrett

This makes sense if both Cloud Hypervisor and QEMU L2 VMs with VFIO devices are failing. I thought the observation was that L2 Cloud Hypervisor with VFIO works, according to this comment: #6110 (comment). Thanks for clarifying that.
(force-pushed dcb3f82 → f9bad5e)
No problem, and the previous comment you've linked was from when I was trying to get an understanding of the bug that @thomasbarrett reported, so it's actually a little off. I added a follow-up comment to clarify the behavior of the bug now that we have a good understanding.
A few more comments below. Also, it is good to start the commit message with a verb, e.g.:
virtio-devices: Move VfioDmaMapping to be in the pci crate
virtio-devices: Map mmio over virtio-iommu
VfioUserDmaMapping is already in the pci crate; this moves VfioDmaMapping to match the behavior. This is a necessary change to allow the VfioDmaMapping trait to have access to MmioRegion memory without creating a circular dependency. The VfioDmaMapping trait needs to have access to mmio regions to map external devices over mmio (a follow-up commit).

Signed-off-by: Andrew Carp <acarp@crusoeenergy.com>
Add infrastructure to lookup the host address for mmio regions on external dma mapping requests. This specifically resolves vfio passthrough for virtio-iommu, allowing for nested virtualization to pass external devices through.

Fixes cloud-hypervisor#6110

Signed-off-by: Andrew Carp <acarp@crusoeenergy.com>
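For context on where that infrastructure plugs in: virtio-iommu forwards guest MAP/UNMAP requests to the VMM through an external DMA-mapping hook. A minimal sketch of the shape of such a hook follows; the trait name, method signatures, and bounds here are an illustrative reduction, not the exact cloud-hypervisor API:

```rust
use std::io;

/// Illustrative reduction of an external DMA-mapping hook such as the one
/// virtio-iommu drives; names and signatures are assumptions made for
/// this sketch, not the exact cloud-hypervisor trait.
trait ExternalDmaMapping: Send + Sync {
    /// Make `size` bytes of guest memory at guest-physical address `gpa`
    /// reachable by the device through IO virtual address `iova`.
    fn map(&self, iova: u64, gpa: u64, size: u64) -> io::Result<()>;

    /// Tear down the mapping previously established at `iova`.
    fn unmap(&self, iova: u64, size: u64) -> io::Result<()>;
}
```

With this PR, the VFIO-backed implementation of `map` can resolve `gpa` whether it lands in guest RAM or in a passed-through device's mmio region, which is what allows the host to program the IOMMU with a valid host virtual address in both cases.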
(force-pushed f9bad5e → c04ff32)
The PR looks good to me. @thomasbarrett @rbradford Any additional comments?
No additional comments. Looks good to me :)
Add infrastructure to lookup the host address for mmio regions on external dma mapping requests. This specifically resolves vfio passthrough for virtio-iommu, allowing for nested virtualization to pass external devices through.
Fixes #6110