
Can DirectML help run a CUDA based AI App on AMD GPU? #30

Closed
SomeAB opened this issue Jun 30, 2020 · 9 comments

Comments

@SomeAB

SomeAB commented Jun 30, 2020

Hello, I came across DirectML while looking into setting up the following app by facebookresearch on a local Windows 10 machine. As I don't have an Nvidia card, but rather an AMD Vega 64, I have not been able to run it so far.

I read in the DirectML documentation that it can run cross-platform code, so I'm wondering if it's possible to run the app mentioned below on a Windows 10 PC (Ubuntu subsystem?).

https://github.com/facebookresearch/pifuhd

@PatriceVignola
Contributor

@SomeAB

The app you linked is using PyTorch. We don't have a DirectML backend for PyTorch at the moment, but this is definitely something we could be interested in supporting in the future if there is a demand from the community.

@SomeAB
Author

SomeAB commented Jun 30, 2020

Hi Patrice. That's good to hear, but I briefly read at one point that I can somehow achieve this using onnxruntime. ONNX does support DirectML, but one needs to build it manually. Any idea how? Also, if there were an easy pip3 command for ONNX-DirectML, that would be great.

Also, are there any video/walkthrough tutorials for DirectML yet? Perhaps some showing the samples listed in the DirectML docs section on Microsoft's website?

@PatriceVignola
Contributor

I'm not very familiar with this app, but as far as I can tell, it exclusively uses PyTorch. It's true that you can convert PyTorch models to ONNX models and use onnxruntime to run them, but you would need to swap the PyTorch calls that the app uses for onnxruntime calls.

To use DirectML with onnxruntime, you will need to follow the Building from source instructions.

@wchao1115
Contributor

@SomeAB I'm not super familiar with the app you refer to, but as for ONNX Runtime, we work closely with them. If your question is how to get a version of ONNX Runtime that works with DirectML, the easiest way currently is to install this NuGet package on your Windows PC. This package comes with a compatible version of DirectML, but it works on Windows only. Good luck with your experiment. Let us know how it goes.

@alimoezzi

> @SomeAB I'm not super familiar with the app you refer to, but as for ONNX runtime, we work closely with them. If your question is how to get a version of ONNX runtime that works with DirectML, the easiest way currently is to install this nuget package on your Windows PC. This package comes with a version of DirectML that works well with it, but would work on Windows only. Good luck with your experiment. Let us know how it goes.

It's only useful when you have a VS solution; it doesn't help when working with PyTorch.

@AGenchev

AGenchev commented Apr 19, 2021

I read about DirectML today and I must say I'm impressed by Microsoft on this one. It might make me boot into Windows more often if this works better than TF on CUDA in Linux. All AMD APU owners will probably do so, because of the nonexistent support AMD provides for its APUs (if this works on APUs). Eh, if this were built on Vulkan it would be more portable and we could use it on a Raspberry Pi...

@jstoecker
Contributor

There are preview builds of PyTorch-DirectML now: https://devblogs.microsoft.com/windowsai/introducing-pytorch-directml-train-your-machine-learning-models-on-any-gpu/
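With those preview builds, existing PyTorch code mostly only needs its device placement changed. A minimal sketch, assuming the `torch-directml` package is installed on Windows (the fallback keeps the script runnable where DirectML is absent; the tensor math is a placeholder):

```python
# Sketch: device placement with the PyTorch-DirectML preview builds.
import torch

try:
    import torch_directml
    device = torch_directml.device()   # first DirectML-capable adapter
except ImportError:
    device = torch.device("cpu")       # fallback where DirectML is absent

a = torch.randn(2, 2).to(device)
b = torch.randn(2, 2).to(device)
c = (a @ b).to("cpu")                  # compute on the device, copy back
print(c.shape)  # torch.Size([2, 2])
```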

@6p5ra

6p5ra commented Jun 22, 2023

> @SomeAB
>
> The app you linked is using PyTorch. We don't have a DirectML backend for PyTorch at the moment, but this is definitely something we could be interested in supporting in the future if there is a demand from the community.

The future never came.

@AGenchev

They made it. Nowadays PyTorch just works on AMD GPUs and APUs; I use it on a Ryzen 5700G APU.
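For what it's worth, on a ROCm build of PyTorch an AMD GPU is exposed through the usual CUDA API surface, so ordinary device-agnostic code runs unchanged. A minimal sketch (falls back to CPU where no GPU is present):

```python
# Sketch: the standard device-selection idiom works the same on ROCm
# builds of PyTorch, where AMD GPUs answer to torch.cuda.* calls.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.ones(3, device=device)
print(x.sum().item())  # 3.0
```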
