Design review of integrated-gpu-passthrough v1 #33
(@johnelse: this is the question I posted on the original pull request:)

Is the "enablement" of integrated GPUs really a host-level thing, or is this a simplification? Would it not be per GPU? I guess I am wondering whether it is absolutely necessary to special-case integrated GPUs, because as a user, I'd see them as just another GPU.

What does /dev/vga_arbiter do? What is the reason that integrated GPUs are treated differently from "normal" GPUs? A normal GPU card can be given to dom0 as well as to a VM, without the need for a reboot. Why would it not be possible to have integrated passthrough "enabled" for VMs as well as for dom0 without a reboot in between? Is it all because the integrated ones are not PCI devices?

I'm asking all these questions just to be sure we cannot further simplify the user interface :)
The integrated GPUs are still PCI devices, but by default they will be the host's primary graphics device, and as such dom0 and even Xen itself may try to use them. The only way to prevent this is to modify the Xen command line and reboot. If this is done, then we would indeed be able to display an integrated GPU to the user as a normal PGPU. N.B. we can't boot this way automatically, as a user may actually want to use the GPU for the dom0 console.

vga_arbiter is documented here - as far as xapi is concerned, it's used to determine which GPU is dom0's primary graphics device. Normally we don't create a PGPU for the GPU reported by vga_arbiter, but if we've modified the Xen command line as above, we can be sure that dom0 won't be using this GPU, so we can go ahead and create a PGPU.

As for whether this is a GPU-level or a host-level thing: instead of the suggested host field, we could instead have two lists of PCI devices called something like "host.GPUs_hidden_from_dom0" and "host.GPUs_hidden_from_dom0_on_reboot". I can't at the moment think of any reason to have anything other than zero or one GPU in either list, but this would be more future-proof.
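To make the vga_arbiter point concrete, here is a minimal sketch (in Python, not xapi's actual implementation) of how a toolstack could work out dom0's primary graphics device. Reading /dev/vga_arbiter yields a status line for the device that currently owns VGA decoding; the address after `PCI:` identifies the boot VGA device. The function name and the exact parsing are illustrative assumptions.

```python
import re

def boot_vga_pci_id(path="/dev/vga_arbiter"):
    """Return the PCI address of the boot VGA device, or None.

    The arbiter reports a status line along the lines of:
      count:1,PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none(0:0)
    """
    with open(path) as f:
        status = f.read()
    match = re.search(r"PCI:([0-9a-fA-F:.]+)", status)
    return match.group(1) if match else None
```

With that address in hand, the logic described above amounts to: skip PGPU creation for this device, unless the Xen command line has already been changed (and the host rebooted) so that dom0 is known not to be using it.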
Design revised after discussion: #66