
PCI Passthrough/Discrete Device Assignment for WSL2. #5492

Open
hameerabbasi opened this issue Jun 26, 2020 · 20 comments

@hameerabbasi

Is your feature request related to a problem? Please describe.
Related to #1788 but different. I'd like to pass through my AMD GPU in its entirety to the guest VM for ROCm workloads. The solution described there only works for CUDA, not anywhere else.

Describe the solution you'd like
I'd like DDA/PCI passthrough for WSL2, which would allow any device to be passed through. Even if the device no longer stays connected to the host, that's exactly the behaviour I want. Since WSL2 is based on Hyper-V, and Hyper-V has passthrough support (which, according to some reports, is present even in Windows 10, not just Windows Server), this should be doable as a feature.

Describe alternatives you've considered
Passing through just the ROCm stack (the way CUDA is handled) won't work, as there's no ROCm for Windows 10. In addition, I'd like nested virtualization (Docker/KVM) in the guest.

@sr229

sr229 commented Jun 26, 2020

Nested virtualization would be hard, as WSL2 distros don't use separate VMs per distro; they use one common VM for everything, namespaced by a custom init (see #994). That also makes PCI passthrough hard, since a passed-through device would be shared by the entire VM, not just one distro. If you're interested in how GPU support works, Canonical gave an overview.

@sirredbeard
Contributor

Nested virtualization is available on Dev Channel builds of Windows 10. You have to set nestedVirtualization=true in .wslconfig, terminate any running WSL instances with wsl.exe --terminate, restart LxssManager, and then re-open Ubuntu. You can then check with lscpu or kvm-ok.
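
For reference, here is a minimal sketch of those steps, assuming an Ubuntu distro; the [wsl2] section name, the distro name, and the elevated-PowerShell context are assumptions on my part, while nestedVirtualization=true, wsl.exe --terminate, LxssManager, lscpu, and kvm-ok come from the description above.

```ini
# %UserProfile%\.wslconfig
[wsl2]
nestedVirtualization=true
```

```powershell
# From an elevated PowerShell prompt (assumed, so the service can be restarted)
wsl.exe --terminate Ubuntu      # "Ubuntu" is an example distro name; wsl.exe --shutdown stops everything
Restart-Service LxssManager     # restart the WSL service
# Then re-open the distro and verify nested virtualization from inside it:
#   lscpu | grep -i virtualization
#   kvm-ok
```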

I have a guide to setting up a WSL environment for maximum KVM performance here.

Docker isn't virtualization, it's containers, and it's already working on WSL 2.

@hameerabbasi
Author

Thanks, but my main use is still with DDA.

@sr229

sr229 commented Jul 11, 2020

> Thanks, but my main use is still with DDA.

I don't get why you need DDA though, @hameerabbasi; we already have a virtualized GPU via /dev/dxg. AFAIK ROCm support is in the works (AMD already has support via DirectML).
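
As a quick sketch, the paravirtualized GPU mentioned above can be confirmed from inside a WSL2 distro by checking for the device node:

```sh
# Present when WSL2 GPU paravirtualization is available in the distro
ls -l /dev/dxg
```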

@hameerabbasi
Author

I want to run KVM as a hypervisor on WSL, and pass through the discrete GPU completely.

DirectML doesn't help, as code compiled for ROCm wouldn't work with it, nor would it aid those wishing to work on the ROCm stack itself.

ROCm support for Windows (if that’s what we’re talking about) was never officially announced. AFAICT it was just one engineer who said it was coming in an unofficial capacity.

If ROCm support for WSL is what we’re talking about, would you mind pointing me to a reference?

@sr229

sr229 commented Jul 12, 2020

If you were at Build 2020, this was announced there; if you weren't, the highlights are available.

@supersat

supersat commented Jul 9, 2021

I'm looking to support a custom FPGA accelerator--exposed to the host as a PCIe device--within WSL2. It seems like DDA is the way to do that, but as far as I can tell it's 1) not supported in Windows 10, and 2) not supported for lightweight VMs like those used by WSL2, only full Hyper-V VMs. Is this correct? If so, it would be nice to address these issues.
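
For contrast, DDA on a full Hyper-V VM (Windows Server) goes roughly like the sketch below; the friendly name, location path, and VM name are placeholders, and none of this currently works against the lightweight WSL2 utility VM.

```powershell
# Locate the device and its PCIe location path ("*Alveo*" is a hypothetical name)
$dev = Get-PnpDevice -FriendlyName "*Alveo*"
$locationPath = (Get-PnpDeviceProperty -InstanceId $dev.InstanceId `
    -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Detach the device from the host and assign it to a full Hyper-V VM
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -LocationPath $locationPath -VMName "MyLinuxVM"
```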

@judge2020

> If you were at Build 2020, this was announced there; if you weren't, the highlights are available.

This is not the same - passing through, for example, an entire GPU isn't the same as the CUDA translation layer that hits the host's driver compute stack. This would be most useful when a second GPU is plugged into your Windows 11 host: you disable it in Device Manager, pass the entire device through to WSL, and WSL can then hand it to QEMU/KVM as a whole PCI device.
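
To illustrate that last step: if a whole device were ever visible inside the WSL2 guest, handing it to a nested QEMU/KVM VM would look roughly like this sketch. The PCI address 0000:01:00.0, the memory size, and the disk image are placeholders, and it presupposes both nested virtualization and the passthrough requested in this issue.

```sh
# Rebind the device to vfio-pci inside the WSL2 guest (address is a placeholder)
modprobe vfio-pci
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo vfio-pci     > /sys/bus/pci/devices/0000:01:00.0/driver_override
echo 0000:01:00.0 > /sys/bus/pci/drivers/vfio-pci/bind

# Start the nested VM with the whole PCI device attached
qemu-system-x86_64 -enable-kvm -m 8G \
  -device vfio-pci,host=0000:01:00.0 \
  -drive file=guest.qcow2,format=qcow2
```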

@mcmordie

> I'm looking to support a custom FPGA accelerator--exposed to the host as a PCIe device--within WSL2. It seems like DDA is the way to do that, but as far as I can tell it's 1) not supported in Windows 10, and 2) not supported for lightweight VMs like those used by WSL2, only full Hyper-V VMs. Is this correct? If so, it would be nice to address these issues.

I have exactly the same issue. I'm looking at other virtualization options and even dual-booting, but would much rather run in WSL2 if possible.

@Quarky93

> I'm looking to support a custom FPGA accelerator--exposed to the host as a PCIe device--within WSL2. It seems like DDA is the way to do that, but as far as I can tell it's 1) not supported in Windows 10, and 2) not supported for lightweight VMs like those used by WSL2, only full Hyper-V VMs. Is this correct? If so, it would be nice to address these issues.

I would also like to pass through a custom FPGA. Did you figure out a solution?

@asicguy

asicguy commented Apr 27, 2022

> I'm looking to support a custom FPGA accelerator--exposed to the host as a PCIe device--within WSL2. It seems like DDA is the way to do that, but as far as I can tell it's 1) not supported in Windows 10, and 2) not supported for lightweight VMs like those used by WSL2, only full Hyper-V VMs. Is this correct? If so, it would be nice to address these issues.

> I would also like to pass through a custom FPGA. Did you figure out a solution?

I'd like to piggyback on this as well, since it's exactly what I'm trying to accomplish too: specifically, a Thunderbolt-attached Xilinx Alveo PCIe card.

@stephanGarland

I also would be interested in this being enabled - my use case is a Linux-only ADC card. I was able to get the module compiled and loaded in WSL2 running Debian 10, but without the device being accessible, it's useless.

@ANONIMNIQ

C'mon Microsoft team, make this possible! We need full PCI GPU passthrough in WSL!

@r12f

r12f commented Mar 11, 2023

I would also like to access the SmartNIC in my machine via WSL. That isn't going to be possible without PCIe passthrough either.

@afiser

afiser commented Apr 12, 2023

Would love access to this feature to test graphics-heavy applications with a secondary GPU from within a WSL VM.

@rtomasi75

Just chiming in to say that I'd also like to access, from WSL, a custom FPGA that is exposed to the host via PCIe, since a specific version of CentOS is needed to flash it. Dual-booting seems to be the only alternative otherwise...


@cjersey

cjersey commented Jan 18, 2024

I would really like to have this feature for using a Coral device on WSL.

@amcnamara

I would like to play around with a BlueField DPU; being able to pass through arbitrary PCIe devices would be really helpful here.

@chaos0frenzy

I also would very much like this.
