
Very hacky solution for Windows guest #2

arne-claeys opened this Issue Mar 23, 2018 · 135 comments

arne-claeys commented Mar 23, 2018

Dear Mr Coulter

First of all, thanks a lot for your research.
In the meantime I have managed to get GPU passthrough (of my muxless NVIDIA GTX 950M) on a Windows guest working as well.
At the moment the solution is very hacky, but perhaps it could be useful.

To this end I have hard-coded the VROM dump into the OVMF image by patching OvmfPkg/AcpiPlatformDxe/QemuFwCfgAcpi.c:
fwcfg_patch.txt

The VROM is read from a header file and copied to a RuntimePool to make sure it remains persistent after the OS has loaded.
In the following part of the code a new ACPI table is constructed that first defines an OperationRegion pointing to the VROM image.
At the end a slightly modified version of your ACPI table, in which I pasted a decompiled version of the ROM call from the SSDT of my laptop, is appended to the rest of the table.
The RVBS variable should be set to the actual length of the VROM dump.

ssdt.asl

As I currently don't have sufficient time to figure out a more elegant solution, the table was compiled as follows.

  • Force-compile the table using iasl -f ssdt.asl.
    The force option is necessary because the OperationRegion is not included in the ASL, but in the preceding part of the ACPI table that was defined in QemuFwCfgAcpi.c.
  • Use the following script to drop the header of the table and create a hex dump in vrom_table.h:
    buildtable.txt
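For reference, the post-processing step could be sketched like this in Python (a hypothetical stand-in for the attached buildtable.txt; the 36-byte skip corresponds to the standard ACPI description table header):

```python
# Hypothetical sketch of the buildtable step: drop the standard 36-byte
# ACPI table header from the compiled AML (output of `iasl -f ssdt.asl`)
# and emit the remainder as a C array suitable for vrom_table.h.
ACPI_HEADER_LEN = 36  # common ACPI description table header length

def build_vrom_table(aml: bytes, name: str = "vrom_table") -> str:
    body = aml[ACPI_HEADER_LEN:]  # strip Signature/Length/OemId/... header
    hex_bytes = ", ".join(f"0x{b:02x}" for b in body)
    return (f"unsigned char {name}[] = {{{hex_bytes}}};\n"
            f"unsigned int {name}_len = {len(body)};\n")

# Usage (after iasl has produced ssdt.aml):
#   open("vrom_table.h", "w").write(build_vrom_table(open("ssdt.aml", "rb").read()))
```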

In my case this made the Error 43 finally disappear.
I hope this is of some help.

Kind regards
Arne


jscinoz commented Mar 24, 2018

Hi Arne,

Thank you for this! I didn't realise anyone else was still looking into this. Sadly, I've not had much time to do so myself lately. I'm glad to hear this got it working for you.

To try and figure out what's going on here, could you please let me know the following about your setup?

  • Were you also assigning a GVT card to the guest as primary VGA, or was the 950M the primary card?
  • Your qemu command line / libvirt XML

Also, can you confirm that loading the same VROM via the original ASL in this repository (without your OVMF patch) did not work on your hardware? I may well be missing something as this whole area is quite new to me, but I'd have expected them to have the same result, as the interface to the Nvidia driver itself (the _ROM method) remains the same in either case.

Cheers,
Jack


arne-claeys commented Mar 24, 2018

Hi Jack
Attached you can find the libvirt XML that was used.
win10-pci.txt

A Virtio GPU was assigned as the primary graphics adapter for my guest.
The NVIDIA card was assigned as the guest's secondary graphics adapter.
As in the Misairu tutorial, the guest was first configured using a Spice client and afterwards accessed using RemoteFX.

I can confirm Error 43 still occurred when I tried using the original ASL table and passed the VROM as a PCI ROMBAR.
However, it has been a while since I tried this out.
As I'm starting to doubt whether I changed the filename in this critical line of the ACPI table, I can't rule out that it would have worked in a simpler way.
Local1 = FWGS(Local0, "genroms/10de:139b:4136:1764")
I will check this later on.

As you wrote in this post that Windows clears the ROMBAR image once booted, I quickly switched to the RuntimePool approach.

Kind regards
Arne


jscinoz commented Mar 25, 2018

Thanks for the information. It's interesting that it worked for you without a GVT card. I will have to try that scenario again myself, with a fresh VM in case perhaps there is something broken in the one I've used so far.

The filename in the FWGS call is simply whatever filename the ROM ends up as in fw_cfg. I named it according to hardware IDs in my case (vendor, device, subsystem) as I intended to eventually make the ASL generic and to just read the PCI IDs from the device at PCI address 1:0:0 and load the appropriate image.
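For illustration, the naming scheme described above (vendor:device:subsystem-vendor:subsystem-device) could be derived from sysfs on Linux roughly like this; the helper is hypothetical, not part of the repo:

```python
# Illustrative sketch: build a fw_cfg ROM name of the form
# "genroms/<vendor>:<device>:<subsys-vendor>:<subsys-device>"
# from the ID files Linux exposes under /sys/bus/pci/devices/.
from pathlib import Path

def fw_cfg_rom_name(sysfs_dev: str) -> str:
    dev = Path(sysfs_dev)
    ids = [(dev / f).read_text().strip().removeprefix("0x")
           for f in ("vendor", "device", "subsystem_vendor", "subsystem_device")]
    return "genroms/" + ":".join(ids)

# e.g. fw_cfg_rom_name("/sys/bus/pci/devices/0000:01:00.0")
```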

I'll give things a try on my machine with your method when I have a bit of free time and will reply back here with the result.


jscinoz commented Apr 2, 2018

A few others and I have had a chance to test your patch, and I can confirm it works, in the sense that it gets past the code 43 error :)

Unfortunately, none of us have had any luck getting 3D workloads going in the guest - were you able to do so in your setup?


jscinoz commented Apr 2, 2018

After further testing, I can confirm 3D workloads do in fact work. What currently doesn't work (and I suspect this is the same with any RemoteFX-based setup) is fullscreen mode. I suspect we might need to emulate an Optimus setup in the VM with GVT for this to work, but thus far I haven't been able to get GVT itself to work (even without an Nvidia card involved).


arne-claeys commented Apr 2, 2018

Nice to hear the patch helped you to finally get rid of code 43 :-)
So I can conclude that 3D workloads work for you, unless you run your RDP client in full screen mode?
At first sight, I find it difficult to imagine why that makes a difference.
Hopefully there will be a way to solve this issue without the need to emulate Optimus with GVT in the VM.
Solving the error 12 here (What about GVT-g?) doesn't really sound promising.

In my setup some simple 3D rendering tasks seemed to run on the GPU, but I did not test this in detail and never in full screen mode.

It will also take a while before I can try out something new, as my own laptop has currently been sent back to the manufacturer for repair.


jscinoz commented Apr 3, 2018

So I can conclude that 3D workloads work for you, unless you run your RDP client in full screen mode?

Not quite. To clarify, it has nothing to do with whether or not the RDP client is fullscreen, but rather, whether the application (within the VM) itself runs in fullscreen. There are a few ways to reproduce this:

  1. Try running a game that defaults to fullscreen (true fullscreen, not borderless windowed). It will likely crash on startup
  2. As above, but with a benchmark; 3DMark is an example of this; it will throw an error relating to enumerating display resolutions (I don't remember the exact name of the throwing method but it was along the lines of ListAllModes)
  3. As an example of how non-fullscreen applications work, try the Unigine Heaven benchmark - it will work fine in windowed mode, but will be unable to enter fullscreen mode.

Hopefully there will be a way to solve this issue without the need to emulate Optimus with GVT in the VM.
Solving the error 12 here (What about GVT-g?) doesn't really sound promising.

I do not get the Code 12 error - I suspect @Misairu-G had something else broken in their setup. I can get GVT working (and even run 3D workloads on the GVT card) if a QXL card remains the primary VGA in the VM.

What I have not been able to get working is GVT as primary VGA in the guest. There's ongoing work by Intel on this (specifically GVT dmabuf and x-display support), but it is still quite raw. Judging by this document, having GVT working as primary VGA will be necessary to trigger the hybrid-graphics behaviour in the Windows graphics stack.


arne-claeys commented Apr 3, 2018

Thanks for the explanation. It gives me a better understanding of the problem now.


jscinoz commented Apr 9, 2018

After a bit of testing, I've found the following things:

  • Some games/engines are clever enough to make use of the Nvidia card when it is not primary VGA, even without a valid hybrid graphics setup. Fortnite is the only such game I've found that works in this configuration, but I imagine the same would occur with any UE4 game. Even when QXL is the VM's primary VGA, it successfully renders on the Nvidia card and draws to the QXL display with framerates comparable to bare-metal performance (both 90-100fps). This seems to be the exception, not the rule; all other tested games (Overwatch, Planetside 2) and benchmarks (3DMark, Unigine Heaven) run with software rendering only in this configuration.
  • GVT-g local display DMA-BUF support currently only works with SeaBIOS-based VMs, and even then seems incredibly flaky - guest BSODs are frequent, and even when the guest doesn't crash outright, there is significant graphical corruption in the guest.

Going forward, I think this leaves us with a few options:

  • Wait for GVT-g to support OVMF (intel/gvt-linux#23) and see if that then allows for a valid hybrid graphics setup in the VM.
  • Make similar changes to SeaBIOS to support loading the Nvidia VBIOS, and see if this results in a valid hybrid graphics setup. There are questions as to what the impact of the noted graphical corruption would be.
  • Modify qxl-wddm-dod to support the additional capabilities required for it to be a valid participant in a hybrid graphics setup - this might be the best option (if it's actually technically workable), as it would avoid quite a bit of complexity inherent to GVT. It is unknown whether the Nvidia driver would cooperate in such a setup, but as far as my limited understanding of the WDDM hybrid graphics model goes, it should work.

jscinoz commented Apr 11, 2018

For anyone else looking at this, an updated OVMF patch generated against current OVMF git master is here


jscinoz commented May 21, 2018

After a bit of experimentation, and a patch from upstream OVMF, I got GVT-g local display support working on my machine. Unfortunately, this does not result in a valid hybrid graphics setup, as the emulated display is a regular DisplayPort device, and as per Microsoft documentation, the iGPU needs to expose an embedded display panel of some kind.

At this point, there are two options to potentially get this working, but both are beyond my current knowledge/expertise, and I sadly don't have much free time to get up to speed in these areas:

  • Modify qxl-wddm-dod to support WDDM 1.3, with the additional constraint that it must expose the emulated display as an embedded display (i.e. DXGKDDI_QUERY_CHILD_RELATIONS should return children of type _DXGK_CHILD_DEVICE_TYPE.TypeIntegratedDisplay). This is probably the preferable option if it works, as we can avoid the complexity of GVT-g.
  • Make similar modifications to GVT-g. I'm unsure as to whether these would require modifying the closed-source Intel Windows driver (i.e. something we can't do), or if it could be done entirely in the vgpu code in the host kernel.

marcosscriven commented Jun 22, 2018

@jscinoz @arne-claeys - just trying to investigate whether this would allow gaming in a Windows guest on a Linux host?

I have a Dell Precision 5520 via work, which has a Quadro M1200. Like the XPS, I believe this is a muxless setup, and it appears as a 3D controller.

I see you mentioning ‘rendering workloads’, and indeed games based on the Unreal engine, but I'm still unclear on the current state, or what the potential is here on a laptop with this setup.


marcosscriven commented Jun 23, 2018

I found a good guide to the current status here: https://www.reddit.com/r/VFIO/comments/8gv60l/current_state_of_optimus_muxless_laptop_gpu/

Appears to mention @jscinoz’s work.


Ashymad commented Jul 4, 2018

Sadly I didn't have any luck with getting this to work. I did, however, create a PKGBUILD that compiles OVMF with the vBIOS patched in, for people who want to test it out quickly (and are running Arch Linux). Just place your ROM in the same folder, name it vBIOS.bin, and run makepkg -si.
EDIT: After copying much of Arne's libvirt XML I was finally able to say goodbye to Code 43 :)


marcosscriven commented Jul 4, 2018

@Ashymad - any ideas how to get the VBIOS for something like the Dell XPS or Precision 5520?


pseudolobster commented Jul 14, 2018

@marcosscriven I'd imagine the VBIOS is included in the system BIOS, so you will not be able to use tools which try to dump the VBIOS from the PCIe bus like you'd do for a discrete card.

The easiest way is probably to try booting up windows on bare metal, then grab the vbios from the registry. I found a guide on how to do this here: https://forums.laptopvideo2go.com/topic/32103-how-to-grab-a-notebooks-vbios-that-is-not-supported-by-nvflash/

Another way would be to decompile your system BIOS and grab the VBIOS rom out of that.

On an HP, I was able to go to support.hp.com, search for my model, download the BIOS update, and run it without actually going through with flashing the BIOS. Just allow it to unpack, then look in C:\windows\temp or %appdata% to see where it put everything. Some installers can be unpacked with 7zip.

Once you have the system BIOS, you'll need to find a copy of Phoenix BIOS Editor, or some similar tool to decompile the UEFI image into its individual firmware blobs. This gave me a bunch of files with names like 4A640366-5A1D-11E2-8442-47426188709B_1693_updGOP.ROM. From there I was able to grep these ROM files for "Nvidia", and I found a copy of my VBIOS that way.
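The search described above can also be sketched programmatically (a hypothetical helper: a PCI option ROM begins with the 0x55 0xAA signature and carries a "PCIR" structure shortly after it; candidates can then additionally be grepped for "NVIDIA"):

```python
# Rough sketch: scan a decompiled firmware blob for candidate PCI option
# ROM images by looking for the 0x55 0xAA ROM signature followed by a
# nearby "PCIR" data structure.
def find_option_roms(blob: bytes) -> list:
    """Return offsets of candidate PCI option ROM images in a firmware blob."""
    offsets = []
    i = blob.find(b"\x55\xaa")
    while i != -1:
        # a real option ROM carries a "PCIR" structure shortly after the header
        if b"PCIR" in blob[i:i + 0x200]:
            offsets.append(i)
        i = blob.find(b"\x55\xaa", i + 1)
    return offsets
```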


marcosscriven commented Jul 20, 2018

Thanks so much @pseudolobster - extracting via the linked how-to worked a treat on the Dell Precision 5520.

In case that link disappears in the future, the basic overview is:

  • Extract [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e968-e325-11ce-bfc1-08002be10318}\0002\Session] with regedit to a file (the 0002 might be different).
  • Cut out everything except the hex data
  • Import that with a hex tool that understands how to turn bytes encoded as XX strings into raw binary data.
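The last step can be sketched in a few lines of Python (a hypothetical helper; the value name in the usage example is illustrative):

```python
# Minimal sketch: turn the "hex:XX,XX,..." payload exported by regedit
# into raw binary data (here, the VBIOS image).
import re

def reg_hex_to_bytes(reg_text: str) -> bytes:
    # grab everything after "hex:" (or "hex(N):"), dropping the "\"
    # line continuations and newlines regedit inserts
    m = re.search(r"hex(?:\([0-9a-f]+\))?:(.*)", reg_text, re.S | re.I)
    payload = m.group(1).replace("\\", "").replace("\r", "").replace("\n", "")
    return bytes(int(b, 16) for b in payload.split(",") if b.strip())

# Usage: open("vbios.rom", "wb").write(reg_hex_to_bytes(open("session.reg").read()))
```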

marcosscriven commented Jul 20, 2018

@pseudolobster @arne-claeys @jscinoz

I extracted the BIOS from the Windows registry, but it seems to be of type x86 PC-AT rather than UEFI:

	PCIR: type 0 (x86 PC-AT), vendor: 10de, device: 13b6, class: 030200
	PCIR: revision 3, vendor revision: 1
	Last image

According to https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF that means this won't work with passthrough.

Do you know a way around that at all please?
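For reference, the type check performed above can be reproduced with a short script (an illustrative helper; offsets follow the PCI Firmware Specification: the word at ROM offset 0x18 points to the PCIR structure, whose code-type byte sits at offset 0x14):

```python
# Sketch of a rom-parser-style check: follow the PCIR pointer in the
# option ROM header and read the code-type byte
# (0x00 = x86 PC-AT legacy image, 0x03 = EFI image).
import struct

CODE_TYPES = {0x00: "x86 PC-AT", 0x03: "EFI"}

def rom_code_type(rom: bytes) -> str:
    assert rom[0:2] == b"\x55\xaa", "not a PCI option ROM"
    pcir_off = struct.unpack_from("<H", rom, 0x18)[0]  # pointer to PCIR
    assert rom[pcir_off:pcir_off + 4] == b"PCIR"
    code_type = rom[pcir_off + 0x14]  # code-type byte in the PCIR structure
    return CODE_TYPES.get(code_type, "unknown ({:#x})".format(code_type))
```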

Ashymad commented Jul 21, 2018


hardeepd commented Jul 24, 2018

@arne-claeys @jscinoz Thank you both very much for your work here.
I've tried your patch and I still get a Code 43 in Windows.
I thought I'd try to debug the firmware in a Linux VM and can see that the nouveau driver fails to find any vBIOS.

nouveau: bios: unable to locate usable image
nouveau: bios: ctor failed, -22

Any ideas how to resolve this or where I should be looking?

Is there a way to verify that the OVMF firmware I've compiled does in fact have the vBIOS embedded?

Edit: I fixed it! It seems the firmware was fine all along, but there was an address problem in the ioh3420 configuration of my qemu script.


marcosscriven commented Aug 3, 2018

@arne-claeys @jscinoz

I created a patched OVMF for my Nvidia Quadro M1200 (per https://github.com/marcosscriven/ovmf-with-vbios-patch)

However, I still get error 43. I see this error in the qemu logs:

2018-08-03T12:45:56.397289Z qemu-system-x86_64: vfio-pci: Cannot read device rom at 0000:01:00.0

I've ensured those patched versions are in use, and KVM is hidden etc.

<domain type='kvm'>
  <name>win10-2</name>
  <uuid>e7d44285-507b-48da-bfe2-2eba415016bd</uuid>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-2.11'>hvm</type>
    <loader readonly='yes' type='pflash'>/edk2/Build/OvmfX64/RELEASE_GCC5/FV/OVMF_CODE.fd</loader>
    <nvram>/edk2/Build/OvmfX64/RELEASE_GCC5/FV/OVMF_VARS.fd</nvram>
    <boot dev='hd'/>
    <smbios mode='host'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='5DIE45JG7EAY'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>

I've also ensured the device is passed through with:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom bar='off'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>

I tried both with and without the <rom bar> tag.

The IOMMU groups look to be all set up OK:

IOMMU Group 1 00:01.0 PCI bridge [0604]: Intel Corporation Skylake PCIe Controller (x16) [8086:1901] (rev 05)
IOMMU Group 1 01:00.0 3D controller [0302]: NVIDIA Corporation GM107GLM [Quadro M1200 Mobile] [10de:13b6] (rev a2)

dmesg shows vfio_pci added:

dmesg | grep -i vfio
[    2.358815] VFIO - User Level meta-driver version: 0.3
[    2.380410] vfio_pci: add [10de:13b6[ffff:ffff]] class 0x000000/00000000
[  184.054104] vfio-pci 0000:01:00.0: enabling device (0002 -> 0003)

And finally lspci shows the card is bound to vfio-pci driver:

lspci -nnk -d 10de:13b6                         
01:00.0 3D controller [0302]: NVIDIA Corporation GM107GLM [Quadro M1200 Mobile] [10de:13b6] (rev a2)
	Subsystem: Dell GM107GLM [Quadro M1200 Mobile] [1028:07bf]
	Kernel driver in use: vfio-pci
	Kernel modules: nvidiafb, nouveau

Any ideas please?


marcosscriven commented Aug 5, 2018

@hardeepd - can you share how you worked out the ioh3420 settings and your XML config, please? I’ve posted my own PCI tree above.


marcosscriven commented Aug 7, 2018

For reference I did finally get this working https://github.com/marcosscriven/ovmf-with-vbios-patch/blob/master/qemu/win-hybrid.xml

The tricky thing is that if the GPU is attached via a bridge, you need to specify that connection:

  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-vendor-id=4136'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-device-id=1983'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.bus=pci.1'/>
  </qemu:commandline>
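Note that the x-pci-sub-vendor-id / x-pci-sub-device-id values above appear to be plain decimal: 4136 and 1983 are the Dell subsystem IDs 1028:07bf from the lspci output earlier, converted from hexadecimal. A trivial sketch of the conversion (helper name is illustrative):

```python
# Convert an lspci-style subsystem ID pair (hex, e.g. "1028:07bf") into
# the decimal values the qemu -set arguments above expect.
def qemu_set_ids(subsys: str):
    vendor_hex, device_hex = subsys.split(":")
    return int(vendor_hex, 16), int(device_hex, 16)

# qemu_set_ids("1028:07bf") -> (4136, 1983)
```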

kalelkenobi commented Aug 9, 2018

Hey @marcosscriven, I think I'm experiencing a similar problem. I successfully passed my dGPU to the Windows 10 x64 guest, using @Ashymad's PKGBUILD to patch the OVMF with my vBIOS. That got me to the point where I was able to install NVIDIA drivers, but after that I'm stuck with code 43. Could you please post your entire XML? The link above did not work for me (404). Thank you very much.


marcosscriven commented Aug 9, 2018

All my config for this is in the same linked repo https://github.com/marcosscriven/ovmf-with-vbios-patch


kalelkenobi commented Aug 9, 2018

Sadly I had no luck, so I turn to you guys :). I'm trying to do this with my MSI GS63VR 6RF, it should be a muxless laptop with a GTX1060 dGPU. What's interesting is that the dGPU should be directly connected to the HDMI output, so I was hoping to pass the 1060 to a Win10 guest and use an external monitor connected to the HDMI (don't know if that's possible).
I'm on ArchLinux using qemu-headless 2.12.1 and libvirt 4.5.0.

The relevant IOMMU groups are:

IOMMU Group 1 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 07)
IOMMU Group 1 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106M [GeForce GTX 1060 Mobile] [10de:1c20] (rev a1)

and also here's my full libvirt xml:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>windows10</name>
  <uuid>da3372e1-96a4-4470-8131-6079e178c609</uuid>
  <memory unit='KiB'>15624192</memory>
  <currentMemory unit='KiB'>15624192</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='6'/>
    <vcpupin vcpu='3' cpuset='7'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.12'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/ovmf/x64/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/windows10_VARS.fd</nvram>
    <bootmenu enable='yes'/>
    <smbios mode='host'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='5DIE45JG7EAY'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>
  <cpu mode='custom' match='exact' check='none'>
    <model fallback='allow'>Skylake-Client</model>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source file='/home/kalel/workspace/VirtualMachines/windows10.img'/>
      <target dev='vda' bus='virtio'/>
      <boot order='3'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='8' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='8'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:c4:cb:d0'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0' multifunction='on'/>
    </interface>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
      <gl enable='no' rendernode='/dev/dri/by-path/pci-0000:00:02.0-render'/>
    </graphics>
    <sound model='ich6'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </sound>
    <video>
      <model type='virtio' heads='1' primary='yes'>
        <acceleration accel3d='no'/>
      </model>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom bar='off'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-vendor-id=5218'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-device-id=4525'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.bus=pci.1'/>
    <qemu:env name='QEMU_AUDIO_DRV' value='pa'/>
    <qemu:env name='QEMU_PA_SAMPLES' value='4096'/>
    <qemu:env name='QEMU_AUDIO_TIMER_PERIOD' value='200'/>
    <qemu:env name='QEMU_PA_SERVER' value='/run/user/1000/pulse/native'/>
  </qemu:commandline>
</domain>

At this point I've tried a lot of different things: patching NVIDIA drivers, removing the VIRTIO primary GPU, booting with the external monitor plugged in, but I'm still getting code 43 after I install NVIDIA drivers. I've also checked the vBIOS and it seems that the one I used to patch OVMF is the right one, because it's an exact match to the one I extracted with nvflash from inside the VM.
I'm probably missing something stupid at this point. Could you guys please help?

Thank you all for your assistance.


KenMasters20XX commented Aug 13, 2018

@kalelkenobi @marcosscriven

I'm posting to confirm that I have a very similar configuration to @kalelkenobi's. I'm using a Gigabyte Aero 15X v8, with a GTX 1070 Max-Q, and yet I'm stuck with Code 43.

I've tried every configuration posted in this repo, as well as in the other dGPU repo, and none of them seem to work.

In Windows, the Nvidia driver installs perfectly fine without complaint, and yet reports Code 43. I've patched my OVMF using the provided PKGBUILD up-thread. I've tried passing my vBIOS ROM separately, alongside, or not at all. I've tried with and without ROM BAR enabled. I've tried SeaBIOS and UEFI both, i440fx and Q35.

Perhaps this information can help someone figure this out but as of now, I am at a bit of a loss. What I have learned is that my particular graphics configuration has the following characteristics:

  1. The card has its own EEPROM chip, an MXIC MX25U4033E 512KB chip.
  2. I can retrieve what I believe to be the complete BIOS via nvflash as well as GPU-Z; however, there is only a non-UEFI BIOS to be found.
  3. I've gone so far as to dump the flash from the chip myself using an EEPROM programmer.
  4. The dump is 512KB, verified against the chip, but only 169 KB is actually used and, again, no UEFI; only the PC Type 0 BIOS; the rest is zeroed.
  5. I've searched far and wide through the Aero 15X and the MSI GS65 BIOS update files for ANYTHING that might be an nVidia UEFI PE file and found nothing. All of this leads me to believe these cards are NOT UEFI-enabled, and they are NOT being shadowed like other Optimus cards that don't have discrete EEPROM (I could be wrong here).
  6. The card shows up in lspci as a "VGA Controller."
  7. This is not an MXM device, and is Optimus-enabled.
  8. The GTX 1070 Max-Q controls the HDMI and mini-DP ports. If the GPU's driver is disabled those ports will not work.
  9. If I attach an external display, I see the internal QXL card mirrored across the GTX card when the kernel goes into framebuffer mode during boot, and I can see Ubuntu's logo and status indicator upon booting the VM (I believe this is VESA). After about 3-4 seconds the system seems to hard-lock (although I have not tried to SSH in to confirm).
  10. I don't see anything from Windows, nor do I see Tianocore's logo upon boot on the external display; this only happens with the Ubuntu splash/status indicator and this is using the default 18.04 Nouveau drivers.
  11. FWIW, all of the card's information shows up in GPU-Z; the BIOS dump from within the VM is exactly the same as it is from outside the VM in bare-metal Windows and from an EEPROM reader directly on the chip. So the BIOS is being passed through successfully. The only difference is that the GPU shows no clock speed, and I believe it is in a D3 power-down/sleep state. AFAIK, I have no way of getting it out of this state (due to the Code 43).

Some suppositions on my part:

I don't believe this card has a UEFI 'BIOS', either on its own discrete EEPROM or in the system firmware. That might be true of all the Max-Q model cards? My guess is that these designs rely completely on the iGPU at boot and operate in CSM mode only, with a legacy BIOS. I don't think any of these laptops can operate with the iGPU off, nor can any of them disable the internal display or remain functional at bootup with the internal display disabled (if done through a BIOS hack).

At this point, I'm left attempting a few other alternatives, but I think I've fully explored the possibility that the VM isn't getting the correct BIOS -- as far as I can tell, it is. I've used @marcosscriven 's configuration as well as many other iterations, and yet, nothing works for me.

Next steps would be to try the ACS patch (because there is a hidden PCIe HDMI audio component at 0000:01:00.1 that I cannot passthrough).

Or.. to try to use an OVMF patched with a UEFI-enabled GTX 1070M BIOS (assuming compatibility with Max-Q).

Or.. try to patch my own custom Pascal BIOS for the Max-Q, based on combining the 1070M UEFI-enabled BIOS with the 1070 Max-Q one, and then flash that to my card (I can flash back with the programmer if it fails, so no worries there), hoping that effectively turning the card into a UEFI-compatible card might help?

Any thoughts or ideas would be greatly appreciated. I would really like to get this working; it seems I'm very close and maybe missing something trivial? I get the feeling that I'm spending a lot of time on this BIOS issue and it's something completely different.

Thanks!

kalelkenobi commented Aug 13, 2018

@KenMasters20XX thank you for your intensive testing. I believe I am in the same situation as you are. My GTX 1060 is NOT a Max-Q design, but I've jumped through the same hoops as you have trying to confirm that I had in fact a valid BIOS (short of using an EEPROM programmer) and came to the same result: the guest seems to be getting the right BIOS, and there is no way to extract a UEFI-compatible dump from the card or the BIOS updates. I tried all the same setups you did (q35, i440fx, patched OVMF, regular OVMF, etc...) with no luck. Unfortunately there's little else I can contribute aside from confirming some of your guesses: my laptop cannot in fact operate with the iGPU or internal display disabled (I tried via an unlocked BIOS). I've also tried using a downloaded vBIOS that seemed a close match to my own, no luck. Lately I've been focusing my attention on the PCI hierarchy, thinking maybe I missed something there. I hadn't found the hidden PCI device, although I suspected it existed. Do you guys think that could be it? Maybe the HDMI audio needs to be passed through for the card to work properly. Bare-metal Windows seems to be able to use it, even though it doesn't show up in Device Manager.

KenMasters20XX commented Aug 13, 2018

@kalelkenobi I think we're the only two users in this thread so far with Pascal cards? Everyone else seems to be using Maxwell-based cards, and that might make the difference. So far, I've not found any instance online of either an integrated or MXM-based Pascal card being successfully passed through.

What is interesting is that MXM cards like the 1070M do in fact have a UEFI BIOS; however, the integrated cards, even though they show up as a VGA controller and have control over the HDMI ports, do not have an associated UEFI module. My guess is these cards simply do not have UEFI functionality by design? I'm going to retrieve my firmware's GOP driver and take another look, but IIRC there was no indication of an Nvidia driver there.

Now, if that's true; then perhaps that's what's causing the Code 43? If not, then there's a UEFI module that I'm simply missing...

BTW, the 'hidden' HDMI audio device does indeed exist; I've seen it "accidentally" exposed by toggling the power state via ACPI calls. At various times (seemingly at random) the HDMI device will show up in lspci. This is one of the reasons I'm thinking of using an ACS-patched kernel in my next series of tests, simply to isolate this as a possibility.

Lastly, I'm thinking that perhaps the ACPI tables might be a difference-maker here. I've taken a cursory look at the Aero 15X's SSDT table and I'm guessing that, perhaps like a Hackintosh, there is some incompatibility here between what's been posted and what the Nvidia driver is expecting to see for an integrated Pascal GPU.

Hard to really say with any certainty since this amounts to shooting in the dark.

kalelkenobi commented Aug 13, 2018

@KenMasters20XX I'm nowhere near as versed as I'm guessing you are in BIOS and ACPI inner workings; that is why I got stumped on the no-UEFI-dump front and essentially gave up on that angle. It is simply way over my head, so instead I'll try looking at the HDMI audio angle. I was able to find a rather old bug about this: https://bugs.freedesktop.org/show_bug.cgi?id=75985
It seems to be an Nvidia proprietary-driver issue that they simply choose to ignore. There is a workaround, described in the bug, to make the HDMI audio show up reliably, and the device is indeed in the same IOMMU group as my 1060:

IOMMU Group 1 00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 07)
IOMMU Group 1 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106M [GeForce GTX 1060 Mobile] [10de:1c20] (rev a1)
IOMMU Group 1 01:00.1 Audio device [0403]: NVIDIA Corporation GP106 High Definition Audio Controller [10de:10f1] (rev a1)

I'll try passing it over to the guest and see if that makes any difference, but I'm not holding my breath. As you pointed out, very few people have tried this on Pascal Optimus laptops, and I have not been able to find anyone who actually succeeded.
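
The workaround from that bug report can be sketched as follows. This is deliberately a dry run that only prints the commands, since they poke PCI config space directly; the register offset `0x488` and the bitmask come from the linked bug, while the GPU address is an assumption to adjust per machine:

```shell
# Dry-run sketch of the freedesktop bug 75985 workaround: set the
# config-space bit that exposes the hidden HDMI audio function, then
# rescan the bus so 01:00.1 appears. Printed rather than executed.
GPU_ADDR="01:00.0"   # assumption: your dGPU's address from lspci

echo "Run as root:"
echo "  setpci -s $GPU_ADDR 0x488.l=0x2000000:0x2000000"
echo "  echo 1 > /sys/bus/pci/rescan"
```

After the rescan, the audio function should show up in `lspci` as a separate device at function .1.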

ArtyomFR commented Mar 6, 2019

@brandsimon Here is what I use in my XML to hide kvm:

<qemu:arg value='-cpu'/>
<qemu:arg value='host,hv_time,kvm=off,hv_vendor_id=null,-hypervisor'/>

I don't know what exactly does the trick in hiding KVM.

@T-vK Do you understand how to apply the patch? Because OVMF edition is pretty new to me.

T-vK commented Mar 6, 2019

@ArtyomFR Since you are on Arch, using the PKGBUILD that has been linked somewhere in this issue would probably be the easiest way.

ArtyomFR commented Mar 6, 2019

@T-vK I've just tried the PKGBUILD and built the patched OVMF just fine, but I'm still getting the error. Maybe this computer is just not compatible.

brandsimon commented Mar 7, 2019

@ArtyomFR Thank you very much, with -hypervisor I have real cores, not virtualized ones 👍
(But still error 43)

brandsimon commented Mar 7, 2019

Ah, I should have looked more carefully. It now says:

Sockets: 1
Cores: 4
Logical processors: 4
Virtualization: Enabled

Virtualization: Enabled confuses me a bit.
@ArtyomFR Do you have the same output or a different one?

T-vK commented Mar 7, 2019

@ArtyomFR You have a GTX 9xxM, so looking at the data we have, the odds that it works are pretty good.
Can you post your configuration? There are many things people have reported as fixing error 43 for them -- the PCI ID thing I mentioned recently in this issue, for instance.

ArtyomFR commented Mar 7, 2019

@brandsimon That's good: your Windows now thinks it is not virtualized. I think Virtualization: Enabled says that your VM is ready to do virtualization itself, not that the system is virtualized.

@T-vK Here are my configurations:
VM XML:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>win10</name>
  <uuid>2822febf-8d9c-461c-9a1c-c22c38c5a89d</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://microsoft.com/win/10"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <os>
    <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/ovmf/x64/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='1234567890ab'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
  </features>
  <cpu mode='host-model' check='partial'>
    <model fallback='allow'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/sda'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:f4:23:ae'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
      <image compression='off'/>
    </graphics>
    <sound model='ich9'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom bar='off'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='2'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='3'/>
    </redirdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,hv_time,kvm=off,hv_vendor_id=null,-hypervisor'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-vendor-id=0x1043'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.hostdev0.x-pci-sub-device-id=0x1130'/>
  </qemu:commandline>
</domain>

qemu.conf (without comments)

user = "gaetan"
group = "wheel"
nvram = [ "/usr/share/ovmf/x64/OVMF_CODE.fd:/usr/share/ovmf/x64/OVMF_VARS.fd" ]
namespaces = []

lspci -tv:

-[0000:00]-+-00.0  Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Host Bridge/DRAM Registers
           +-01.0-[01]----00.0  NVIDIA Corporation GM107M [GeForce GTX 950M]
           +-02.0  Intel Corporation HD Graphics 530
           +-04.0  Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Thermal Subsystem
           +-14.0  Intel Corporation 100 Series/C230 Series Chipset Family USB 3.0 xHCI Controller
           +-14.2  Intel Corporation 100 Series/C230 Series Chipset Family Thermal Subsystem
           +-16.0  Intel Corporation 100 Series/C230 Series Chipset Family MEI Controller #1
           +-17.0  Intel Corporation HM170/QM170 Chipset SATA Controller [AHCI Mode]
           +-1c.0-[02]----00.0  Realtek Semiconductor Co., Ltd. RTL8821AE 802.11ac PCIe Wireless Network Adapter
           +-1c.3-[03]--+-00.0  Realtek Semiconductor Co., Ltd. RTL8411B PCI Express Card Reader
           |            \-00.1  Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
           +-1f.0  Intel Corporation HM170 Chipset LPC/eSPI Controller
           +-1f.2  Intel Corporation 100 Series/C230 Series Chipset Family Power Management Controller
           +-1f.3  Intel Corporation 100 Series/C230 Series Chipset Family HD Audio Controller
           \-1f.4  Intel Corporation 100 Series/C230 Series Chipset Family SMBus

Tell me if you need more.
Also, how could I do the PCI thing in my XML, and with what ID according to my lspci?
Do you know if there is a detailed log that could show me what's causing the error 43 on the Windows guest?

bubbleguuum commented Mar 8, 2019

In the same situation as many here: trying passthrough to Windows on my ThinkPad P72 with Intel UHD 630 + Quadro P600. Like most modern Optimus laptops, this is a muxless setup with the video ports managed by the dGPU. Following Misairu's guide, I got as far as the dreaded error 43. I successfully dumped the vBIOS with VBiosFinder, and it is not UEFI-aware.
I pass it to vfio-pci with the romfile option, but it doesn't help -- pretty much as expected once I read this huge thread afterwards, looking for a solution. And having read all of it, it appears there is no point in attempting to generate a custom BIOS including the vBIOS, or the other hacks, since others have tried them and failed.

So, to conclude, can we say that there is currently no working solution at all for such Pascal-based laptops? It would have been so nice to display that Windows guest on an external monitor with full GPU passthrough :(

T-vK commented Mar 8, 2019

@ArtyomFR Looks like you have already added plenty of root ports? I don't really understand the XML format. I have only used the command line options as seen in here: https://github.com/T-vK/MobilePassThrough/blob/master/start-vm.sh

ArtyomFR commented Mar 9, 2019

@T-vK I did it! After checking the root port corresponding to the GPU (they are in the same IOMMU group), I observed that it wasn't set up in the VM the way the physical one is. So I set the slot ID, and now my GPU is recognized and usable.
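
For future readers, a hypothetical sketch of what that amounts to in the libvirt XML: on this machine `lspci -tv` shows the GPU behind the host bridge port at `00:01.0`, so the guest's pcie-root-port is given slot `0x01` and the GPU sits at bus `0x01`, slot `0x00` behind it, mirroring the host topology. The exact addresses, chassis, and port values are assumptions and will differ per machine:

```xml
<!-- Hypothetical: guest root port at the same slot as the physical
     PCIe port (00:01.0 in lspci -tv) -->
<controller type='pci' index='1' model='pcie-root-port'>
  <model name='pcie-root-port'/>
  <target chassis='1' port='0x10'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</controller>
<!-- ...and the GPU behind it, mirroring the host's 01:00.0 -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</hostdev>
```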

T-vK commented Mar 9, 2019

@ArtyomFR Great to hear that! I'll add your ASUS X550VX to the list. Can you tell me what CPU your device has?
It would also be nice if you could share the config you ended up using. I'm sure this will be helpful to someone in the future.
Have you tried running games, yet? Do they behave correctly?
People have reported that some applications only worked when they used RDP on their 9xxM cards.

@brandsimon You mentioned that you managed to pass through the dGPU and share the iGPU with the VM at the same time. Have you tried doing that with your Windows VM?
I'm also curious who gets access to the display outputs when both host and guest can access the iGPU.
And can you share the config that you used to get your Linux VM working so that future readers can benefit from it? That would be nice.

ArtyomFR commented Mar 9, 2019

@T-vK It's an Intel Core i5-6300HQ, and here is my XML. I've only tested one game over RDP using RemoteFX, and it doesn't seem to give the best in-game performance. I will try the Nvidia GameStream method later. Do you have other methods in mind that I could try?

T-vK commented Mar 10, 2019

Yes, Looking Glass would be ideal. But it requires that the VM has a framebuffer, which often isn't the case. From what I understand, one cause can be that no external monitor is plugged in, which could be solved using EDID dummy plugs (e.g. for HDMI or for Mini DisplayPort).
Another cause can be that the dGPU wants to use the iGPU's framebuffer, and that could maybe be solved by using GVT-g to share the iGPU with the VM as well. (I haven't heard of anyone having success with that on a Windows VM yet, though.)
But I recently started modding the BIOS of a notebook to enable features that weren't available by default, and I stumbled across a few interesting-looking options, hidden by default, regarding display outputs and muxed/muxless operation:

Maybe these options could be used to our advantage.

ArtyomFR commented Mar 10, 2019

@T-vK So, I've set up my VM and my host to use Looking Glass; the only part that doesn't work right now is the Looking Glass application in the VM:

[I]     CaptureFactory.h:83   | CaptureFactory::DetectDevice   | Trying DXGI
[I]             DXGI.cpp:232  | Capture::DXGI::Initialize      | Device Descripion: Microsoft Basic Render Driver
[I]             DXGI.cpp:233  | Capture::DXGI::Initialize      | Device Vendor ID : 0x1414
[I]             DXGI.cpp:234  | Capture::DXGI::Initialize      | Device Device ID : 0x8c
[I]             DXGI.cpp:235  | Capture::DXGI::Initialize      | Device Video Mem : 0 MB
[I]             DXGI.cpp:236  | Capture::DXGI::Initialize      | Device Sys Mem   : 0 MB
[I]             DXGI.cpp:237  | Capture::DXGI::Initialize      | Shared Sys Mem   : 2558 MB
[I]             DXGI.cpp:241  | Capture::DXGI::Initialize      | Capture Size     : 1920 x 1080
[E]             DXGI.cpp:293  | Capture::DXGI::Initialize      | Failed to create D3D11 device: 0x887a0004 (The specified device interface or feature level is not supported on this system.)
[E]     CaptureFactory.h:92   | CaptureFactory::DetectDevice   | Failed to initialize a capture device
Unable to configure a capture device

I think it could not find any monitor other than the QXL one, so it can't work properly, as you said.
I think Intel GVT-g is a good solution because it doesn't involve modifying the BIOS.
Talking about BIOSes: are you working on the vBIOS that can be passed to the VM through OVMF, on OVMF itself, or on the "physical" system BIOS?
Nvidia GameStream doesn't work for me right now because the "Shield" tab that lets the user stream games isn't showing in my VM, which could be caused by many things.

brandsimon commented Mar 10, 2019

@T-vK
Yes, I tried the same with Windows, but unfortunately I am not even able to get GVT-g working with Windows.
I only get the message: Guest has not initialized the display (yet)
Someone posted a similar issue on the vfio-users mailing list; maybe there will be a solution.

As far as I can tell, no one gets access to the display outputs. I have only tested HDMI so far, but neither the host nor the guest got an output in xrandr. I also don't remember the dmesg output (but I am not sure about this).
I can't test this at the moment, because I don't have a monitor here.
The host also did not get the output when the VM was not running but the device had already been created. I only just realized that; the next time I am able to test, I will check whether it is because of the kernel parameters or because of the created device.

I shared the vga-device-config before, but here is the complete config:

/usr/bin/qemu-system-x86_64
	-bios /path/to/OVMF_CODE_patched.fd
	-L ovmf64/
	-enable-kvm
	-M q35
	-cpu host,kvm=off,hv_time,hv_vendor_id=nullzwei
	-smp 4,sockets=1,cores=4,threads=1
	-m 16384
	-mem-prealloc 
	-name Tensorflow
	-rtc base=localtime 
	-net nic,addr=0xa,
	-net user,hostfwd=tcp::22221-:22222
	-vga none
	-display gtk,gl=on
	-device vfio-pci,bus=pcie.0,addr=02.0,sysfsdev=/sys/devices/pci0000:00/0000:00:02.0/fe4e8915-51ea-40ec-8adb-234f23e7fab0/,x-igd-opregion=on,display=on
	-device ioh3420,bus=pcie.0,addr=01.0,multifunction=on,port=1,chassis=1,id=root.1
	-device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,romfile=/path/to/vbiosfinder/vbios_10de_13b6.rom
	-drive file=arch-gaming.qcow2,id=win,format=qcow2,if=none
	-device ide-hd,bus=ide.0,drive=win 
	-device nec-usb-xhci,id=usb
	-device usb-host,vendorid=0x24c6,productid=0x5d04,id=hostdev0,bus=usb.0
	-device usb-host,vendorid=0x04d9,productid=0xfa58,id=hostdev1,bus=usb.0 
	-boot c
ArtyomFR commented Mar 10, 2019

@T-vK I set up Intel GVT-g in my VM and it creates a new virtual screen, but nothing changes on the Nvidia card. The system still thinks there is no screen attached to the card, and Looking Glass doesn't work.

ArtyomFR commented Mar 10, 2019

@T-vK OK, I've now correctly set up Intel GVT-g: I can open the Nvidia control panel, switch between the GPUs, and set applications to be executed by the desired GPU. But looking-glass-host.exe still doesn't work, because there is no display directly attached to the Nvidia card.
But I think I've successfully managed to recreate the Optimus behavior in a virtualized environment. (Only with old proprietary drivers; it doesn't work with the latest Nvidia drivers.)

T-vK commented Mar 10, 2019

@ArtyomFR I'm modifying the actual BIOS. The vBIOS is technically in there, but I'm not actually touching it.
It's great to hear you managed to get GVT-g to work on a Windows guest, btw. I haven't heard of anyone else doing that successfully. It would be neat if you could share details on what you had to do to get it to work, e.g. your config and what driver version you had to use.

Have you tried connecting an external monitor to see if that makes any difference? It would be interesting to hear if you have the same behavior as brandsimon (neither the guest nor the host can access the external monitor outputs).

@brandsimon
Thanks for sharing. Did I understand you correctly that your external display outputs wouldn't work at all while sharing the iGPU via GVT-g? Or did the internal display go black as well?
You are able to use the external ports while only passing through the dGPU though, right?

T-vK commented Mar 11, 2019

I have a new device working (Gigabyte P35X v4):

I think the GTX 980M is the most powerful card we have managed to pass through yet.
To my surprise Looking Glass appears to be working. Maybe that's just because I was connected via RDP at the same time. I haven't had time to test games yet.

ArtyomFR commented Mar 11, 2019

@T-vK Did you use Intel GVT-g? Can you post your current configuration? What drivers did you use, and how many screens are detected?
I think a reboot broke my virtual Optimus setup: the Intel screen is no longer detected, so the Nvidia control panel won't show up. If I plug in a screen, it goes to the host.

T-vK commented Mar 11, 2019

I did not use GVT-g and I didn't manually install the Nvidia driver, Windows 10 installed it for me.

T-vK commented Mar 11, 2019

My config was pretty much just the start-vm.sh script from the MobilePassThrough project. I just made some small adjustments to allow Looking Glass to work.

#!/usr/bin/env bash

SCRIPT_DIR=$(cd "$(dirname "$0")"; pwd)
PROJECT_DIR="${SCRIPT_DIR}"
UTILS_DIR="${PROJECT_DIR}/utils"
DISTRO=$("${UTILS_DIR}/distro-info")
DISTRO_UTILS_DIR="${UTILS_DIR}/${DISTRO}"
VM_FILES_DIR="${PROJECT_DIR}/vm-files"

VM_DISK_SIZE=20G # Changing this has no effect after `prepare-vm` has already been called to create a VM disk
RAM_SIZE=4G
CPU_CORE_COUNT=3
INSTALL_IMG="${VM_FILES_DIR}/windows10.iso"
DRIVE_IMG="${VM_FILES_DIR}/WindowsVM.img"
SMB_SHARE_FOLDER="${VM_FILES_DIR}/vmshare"
#GPU_ROM="${VM_FILES_DIR}/vbios-roms/vbios.rom"
WIN_VARS="${VM_FILES_DIR}/WIN_VARS.fd"
OVMF_CODE="/usr/share/OVMF/OVMF_CODE.fd"
VIRTIO_WIN_IMG="/usr/share/virtio-win/virtio-win.iso"
GPU_PCI_ADDRESS=01:00.0
# If you don't use Bumblebee, you need to set the GPU_PCI_ADDRESS manually (see output of lspci) and you have to remove all following occurences of optirun in this script
# This script has only been tested with Bumblebee enabled.

if [ ! -f "${DRIVE_IMG}" ]; then
    # If the VM drive doesn't exist, run the prepare-vm script to create it
    sudo $DISTRO_UTILS_DIR/prepare-vm
fi

MAC_ADDRESS=$(cat "${VM_FILES_DIR}/MAC_ADDRESS.txt")


GPU_IDS=$(optirun lspci -n -s "${GPU_PCI_ADDRESS}" | grep -oP "\w+:\w+" | tail -1)
GPU_VENDOR_ID=$(echo "${GPU_IDS}" | cut -d ":" -f1)
GPU_DEVICE_ID=$(echo "${GPU_IDS}" | cut -d ":" -f2)
GPU_SS_IDS=$(optirun lspci -vnn -d "${GPU_IDS}" | grep "Subsystem:" | grep -oP "\w+:\w+")
GPU_SS_VENDOR_ID=$(echo "${GPU_SS_IDS}" | cut -d ":" -f1)
GPU_SS_DEVICE_ID=$(echo "${GPU_SS_IDS}" | cut -d ":" -f2)

echo "GPU_PCI_ADDRESS: ${GPU_PCI_ADDRESS}"
echo "GPU_IDS: $GPU_IDS"
echo "GPU_VENDOR_ID: $GPU_VENDOR_ID"
echo "GPU_DEVICE_ID: $GPU_DEVICE_ID"
echo "GPU_SS_IDS: $GPU_SS_IDS"
echo "GPU_SS_VENDOR_ID: $GPU_SS_VENDOR_ID"
echo "GPU_SS_DEVICE_ID: $GPU_SS_DEVICE_ID"

#sudo echo "options vfio-pci ids=${GPU_VENDOR_ID}:${GPU_DEVICE_ID}" > /etc/modprobe.d/vfio.conf

echo "Loading vfio-pci kernel module..."
sudo modprobe vfio-pci
echo "Unbinding Nvidia driver from GPU..."
sudo echo "0000:${GPU_PCI_ADDRESS}" > "/sys/bus/pci/devices/0000:${GPU_PCI_ADDRESS}/driver/unbind"

echo "Binding VFIO driver to GPU..."
sudo echo "${GPU_VENDOR_ID} ${GPU_DEVICE_ID}" > "/sys/bus/pci/drivers/vfio-pci/new_id"
#sudo echo "8086:1901" > "/sys/bus/pci/drivers/vfio-pci/new_id"
# TODO: Make sure to also do the rebind for the other devices that are in the same iommu group (exclude stuff like PCI Bridge root ports that don't have vfio drivers)


# This ensures that the vBIOS will not be overridden if GPU_ROM is not set
if [ -z "$GPU_ROM" ]; then
    GPU_ROM_PARAM=""
else
    GPU_ROM_PARAM=",romfile=${GPU_ROM}"
fi

# This ensures that the -net parameter for SMB sharing won't be passed if SMB_SHARE_FOLDER is not set
if [ -z "$SMB_SHARE_FOLDER" ]; then
    SMB_SHARE_PARAM=""
else
    SMB_SHARE_PARAM="-net user,smb=${SMB_SHARE_FOLDER}"
fi

echo "Starting the Virtual Machine"
# Refer https://github.com/saveriomiroddi/qemu-pinning for how to set your cpu affinity properly
sudo qemu-system-x86_64 \
  -name "Windows10-QEMU" \
  -machine type=q35,accel=kvm \
  -global ICH9-LPC.disable_s3=1 \
  -global ICH9-LPC.disable_s4=1 \
  -enable-kvm \
  -cpu host,kvm=off,hv_vapic,hv_relaxed,hv_spinlocks=0x1fff,hv_time,hv_vendor_id=12alphanum \
  -smp ${CPU_CORE_COUNT} \
  -m ${RAM_SIZE} \
  -mem-prealloc \
  -balloon none \
  -rtc clock=host,base=localtime \
  -nographic \
  -serial none \
  -parallel none \
  -boot menu=on \
  -boot order=c \
  -k en-us \
  ${SMB_SHARE_PARAM} \
  -spice port=5900,addr=127.0.0.1,disable-ticketing \
  -drive "if=pflash,format=raw,readonly=on,file=${OVMF_CODE}" \
  -drive "if=pflash,format=raw,file=${WIN_VARS}" \
  -drive "file=${INSTALL_IMG},index=1,media=cdrom" \
  -drive "file=${VIRTIO_WIN_IMG},index=2,media=cdrom" \
  -drive "id=disk0,if=virtio,cache.direct=on,if=virtio,aio=native,format=raw,file=${DRIVE_IMG}" \
  -netdev "type=tap,id=net0,ifname=tap0,script=${VM_FILES_DIR}/tap_ifup,downscript=${VM_FILES_DIR}/tap_ifdown,vhost=on" \
  -device ich9-intel-hda \
  -device hda-output \
  -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
  -device "vfio-pci,host=${GPU_PCI_ADDRESS},bus=root.1,addr=00.0,x-pci-sub-device-id=0x${GPU_SS_DEVICE_ID},x-pci-sub-vendor-id=0x${GPU_SS_VENDOR_ID},multifunction=on${GPU_ROM_PARAM}" \
  -device virtio-net-pci,netdev=net0,addr=19.0,mac=${MAC_ADDRESS} \
  -device pci-bridge,addr=12.0,chassis_nr=2,id=head.2 \
  -device virtio-keyboard-pci,bus=head.2,addr=03.0,display=video.2 \
  -device virtio-mouse-pci,bus=head.2,addr=04.0,display=video.2 \
  -usb \
  -device usb-host,vendorid=0x0b95,productid=0x1790 \
  -device ivshmem-plain,memdev=ivshmem,bus=pcie.0 \
  -object memory-backend-file,id=ivshmem,share=on,mem-path=/dev/shm/looking-glass,size=32M \
  -device qxl,bus=pcie.0,addr=1c.4,id=video.2 \
  #-vga qxl \
  #-device usb-host,hostbus=3,hostaddr=9 \
  #-device usb-tablet \

# This should get executed when the vm exits
sudo echo "0000:${GPU_PCI_ADDRESS}" > "/sys/bus/pci/drivers/vfio-pci/0000:${GPU_PCI_ADDRESS}/driver/unbind"
sudo echo "OFF" >> /proc/acpi/bbswitch
ArtyomFR commented Mar 11, 2019

@T-vK I will try to adapt your configuration to the XML system. What do you think were the critical points in getting Looking Glass working?

T-vK commented Mar 11, 2019

  -device ivshmem-plain,memdev=ivshmem,bus=pcie.0 \
  -object memory-backend-file,id=ivshmem,share=on,mem-path=/dev/shm/looking-glass,size=32M

These ones are the critical ones, but that is mentioned in the quickstart guide: https://looking-glass.hostfission.com/quickstart

Oh, and as I expected, it only works while I am using RDP. Windows doesn't really say that it detected any monitors; it just says that I can't do anything in the display/graphics settings while connected via RDP. The internal display and the external HDMI and Mini DP ports stay with the host. I can't test the VGA port, but I expect it stays with the host as well.

When I disconnect from RDP, then Looking Glass freezes the image and says "System Paused".

Edit:
I wonder if we could use Intel WiDi to get a fake display into the VM. I mean, RDP does appear to work to some degree, but I'm pretty sure lots of games and other 3D applications won't run at all, or at least not properly, in an RDP environment. At least that's what the others with 8xxM/9xxM card setups have reported.
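
One host-side detail that trips people up (from the quickstart guide linked above): the shared-memory file has to exist with permissions qemu can open. A sketch using systemd-tmpfiles; the `user` name and `kvm` group are assumptions for a typical libvirt setup, and the actual buffer size is governed by the qemu `size=32M` argument, not this file:

```
# /etc/tmpfiles.d/10-looking-glass.conf (assumption: a systemd distro)
# Type  Path                    Mode  UID   GID  Age
f       /dev/shm/looking-glass  0660  user  kvm  -
```

systemd-tmpfiles then recreates the file with the right ownership on every boot, so the Looking Glass client and qemu can both open it.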

brandsimon commented Mar 12, 2019

@T-vK
The internal display is working correct.

My host system is booting with kernel parameter (intel_iommu=on i915.enable_gvt=1 iommu=pt vfio_pci.ids=10de:13b6 efifb=off) and at startup the gvt-g device is created.
I also have a modprobe rule: blacklist nouveau
Now I dont get any external displays showing via xrandr, when connected.

I am not sure about this, but I think I needed to remove the i915 kernel parameter and also needed to load nouveau again to get external displays showing in the host system. I cant test it at the moment. I keep you updated when I can test it.

Edit: It could also be that just loading nouveau is enough.
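For reference, that kind of blacklist rule usually lives as a one-line file under `/etc/modprobe.d/` (the filename below is just an example — anything ending in `.conf` works):

```
# /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
```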

T-vK commented Mar 13, 2019

Can you tell me what iommu=pt does? I couldn't find documentation on that.

brandsimon commented Mar 14, 2019

@T-vK

The pt option only enables IOMMU for devices used in passthrough and will provide better 
host performance. However, the option may not be supported on all hardware. Revert to 
previous option if the pt option doesn't work for your host.

I found this here: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/installation_guide/appe-configuring_a_hypervisor_host_for_pci_passthrough
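A quick way to verify that the running kernel actually received the parameter is to check `/proc/cmdline` (this only reads the boot command line, nothing hardware-specific):

```shell
# Report whether the kernel was booted with iommu=pt
if grep -qw 'iommu=pt' /proc/cmdline; then
    echo "IOMMU passthrough mode requested"
else
    echo "iommu=pt not set"
fi
```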

T-vK commented Mar 14, 2019

@brandsimon Thanks.

I wonder if it would be possible to use a USB-to-HDMI adapter and simply pass that through to the VM to make it think there is a monitor. Or maybe that adapter in combination with an EDID dummy plug.

T-vK commented Mar 18, 2019

@ArtyomFR Apparently there is an alternative to Looking Glass called dma-buf, which you can use if you have GVT-g working.

Also very interesting: https://lists.01.org/pipermail/igvt-g/2018-April/001409.html

After quite a bit of experimentation and another OVMF patch [4], we
finally got GVT-g display output working with a UEFI + Windows guest.
This resulted in both the GVT-g iGPU and the Nvidia dGPU initialising
correctly, and showing up in Windows Task Manager's performance tab.

So maybe that could even solve error 43 to some degree.
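For anyone who wants to try the dma-buf route: on the QEMU side the GVT-g dma-buf display is enabled roughly like this (a sketch, not tested here — the sysfs path is a placeholder for the mdev you create via sysfs, and a GL-capable display backend is required):

```
qemu-system-x86_64 ... \
  -display gtk,gl=on \
  -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/<your-mdev-uuid>,display=on
```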

ktod16 commented Mar 19, 2019

Hi, let me share my updates on this.

I am more or less in the same situation as brandsimon. As soon as the dGPU is assigned to the vfio-pci driver, the HDMI output stops working. When I plug in the HDMI cable, Fedora shows the multi-monitor menu (extend right/left, etc.), but no matter what I select, nothing happens.
In Windows the same happens if I disable the dGPU in Device Manager. If I keep only the Intel GPU, I lose the external monitors.
I think this is somehow related to the BIOS video settings. In the BIOS I can only select Hybrid or Discrete; there is no option for Intel only. If the Intel GPU somehow ends up alone in the operating system, it does not support external monitors.
Now back to passthrough.
Out of curiosity I created a Win7 VM (Q35/BIOS) and managed to install the Nvidia driver (modified to include the new hardware ID) in Windows 7. In Device Manager the device is now working properly, but when I open Solitaire (the game) I get a Windows message that "3D acceleration is not enabled or my video card doesn't support it...". I'm using KRDC to access the VM, and I kept only the dGPU as display adapter (no Spice, no QXL, nothing). Also, no external monitor is detected.
Back in Windows 10 I managed to set up GVT-g, and Windows automatically installs the Intel driver (adapter correctly identified), but for Nvidia I have to do it manually with the modified driver. The problem is Code 43. Again no external monitor is detected, and I use KRDC. Also, my Windows OS is installed in BIOS mode; maybe that is the issue.
One more thing: using GVT-g introduces some sort of lag in Windows. If I drag a window, the mouse moves correctly but the window stays a little bit behind (like a delayed movement).
