
Will ROCm support Windows? Or never? #666

Open
Color-Dark opened this issue Jan 8, 2019 · 132 comments
Comments

@Color-Dark

Do you have a plan for supporting ROCm on the Windows platform?

@jlgreathouse
Collaborator

Hi @lucifer-morning-star

If you mean the ROCm kernel driver and HSA stack, there are currently no plans to port our Linux driver or full HSA runtime to Windows. The driver itself is highly tied to Linux, and our HSA software stack (Thunk and ROCr) is itself highly tied to our driver.

We already support OpenCL in Windows through software included with our Catalyst drivers.

Our HIP and HCC compilers/runtimes, and the libraries and software built using them (such as rocBLAS, MIOpen, and TensorFlow and PyTorch built on MIOpen), may technically be possible to port to Windows, but I cannot give any public commitment about when or if AMD will perform these ports.
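For readers unfamiliar with HIP: its runtime API deliberately mirrors the CUDA runtime almost name-for-name, which is why porting HIP would carry so much software with it. The toy Python below sketches the kind of mechanical renaming the real HIPIFY tools perform; the mapping table here is a small hand-picked subset, not the actual tool.

```python
# Toy illustration (not the real hipify tool): HIP mirrors the CUDA runtime
# API almost name-for-name, so much of a port is mechanical renaming.
import re

# Hand-picked subset of the real mappings; the actual tables are much larger.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def hipify(source: str) -> str:
    """Rewrite CUDA runtime calls in a source string to their HIP equivalents."""
    pattern = re.compile("|".join(re.escape(name) for name in CUDA_TO_HIP))
    return pattern.sub(lambda m: CUDA_TO_HIP[m.group(0)], source)

print(hipify("cudaMalloc(&buf, n); cudaMemcpy(buf, src, n, kind); cudaFree(buf);"))
# -> hipMalloc(&buf, n); hipMemcpy(buf, src, n, kind); hipFree(buf);
```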

@musm

musm commented Oct 7, 2019

@jlgreathouse given the recent announcements and collaboration between Microsoft and AMD on the Surface Laptop and their custom chips, do you mind providing some insight into whether we can expect any support in the future? Thanks.

@victoryang00

When WSL makes GPU passthrough possible, everything is solved.

@xsacha

xsacha commented Nov 18, 2019

When WSL makes GPU passthrough possible, everything is solved.

Not at all. There's no way to install drivers on WSL.

@briansp2020

Not at all. There's no way to install drivers on WSL.

Won't it change with WSL2? I was thinking that WSL2 and SR-IOV would make it possible to run ROCm on Windows. Not that I'd be willing to pay for a GPU with SR-IOV...

@wwwguess

wwwguess commented Mar 4, 2020

Deep learning is the future. Why doesn't AMD care about this? Almost nobody has studied or worked with AMD GPUs. My MacBook Pro 16 can't be used for deep learning with its Radeon Pro 5500M. That's a piece of shit!

@Mandrewoid

Mandrewoid commented Mar 4, 2020

Deep learning is the future. Why doesn't AMD care about this? Almost nobody has studied or worked with AMD GPUs. My MacBook Pro 16 can't be used for deep learning with its Radeon Pro 5500M. That's a piece of shit!

Edit: I originally thought you meant a 2016 MacBook; I have since realized you mean the new 16-inch MacBook. Nevertheless, the point I made below still stands.
My Vega 64 has 12 TFLOPS FP32. The MBP16 is rated at up to 4 TFLOPS, and when the laptop gets hot, speed will throttle.

This is not the place for comments like that, but even if it were... your MacBook does not have anywhere NEAR the compute capability of a modern desktop GPU. You would be better off using Google Colab for free.
It has access to GPUs and TPUs, and won't load up your local computer.
Here's a beginner article:
https://towardsdatascience.com/getting-started-with-google-colab-f2fff97f594c

@papadako

papadako commented May 3, 2020

Do you have a plan for supporting ROCm on the Windows platform?

I think this is the first time I've found a correlation between the description of a GitHub issue and its issue number! :)

@briansp2020

Is ROCm coming to Windows through WSL2? MS just announced GPU compute workload support through WSL2. I hope it's not just for NVIDIA...
https://www.phoronix.com/scan.php?page=news_item&px=Linux-GUI-Apps-GPU-WSL2

@briansp2020

It seems WSL2 will support DirectML and CUDA. Will the HIP API be ported as well?

I have not dug into DirectML too deeply. Is DirectML a machine-learning-specific API, or is it general enough to support other GPGPU applications like CUDA/OpenCL?

So many APIs for GPU compute. I was hopeful that HIP would serve as a unifying GPGPU development API/language, but alas...

@teddy-mindcompass

OK, goodbye. Welcome, CUDA.

@xsacha

xsacha commented Nov 20, 2022

Intel supports Windows on their new Arc GPUs and still nothing from AMD.

@teddy-mindcompass

teddy-mindcompass commented Nov 20, 2022

Intel supports Windows on their new Arc GPUs and still nothing from AMD.

It can work on Windows using Microsoft Antares, but I don't have the time for that!!! https://github.com/microsoft/antares

It's a big project to make it work perfectly.

@saadrahim saadrahim reopened this Nov 21, 2022
@saadrahim
Member

Work on a Windows port is well underway.

https://github.com/amd/rocm-examples

You can see signs of it in many of our repositories.

@xsacha

xsacha commented Nov 21, 2022

From that repo: "ROCm toolchain for Windows (No public release yet)"

Still waiting :( There has been talk of ROCm for Windows for roughly 5 years now, with small hints like this on AMD repos and docs. It doesn't give me any more hope to see that.

I had fast GPGPU working on Windows with 'CTM' (Close to Metal) / the Stream SDK over a decade ago. Then AMD went silent (buggy and slow OpenCL), and I've been forced to use CUDA ever since.

Pretty much overnight, Intel has appeared on the scene and offered cross-platform support without any issues.

@tallesairan

This is a shame. AMD should follow NVIDIA's example with CUDA; it seems to me that AMD is lazy.

@lshqqytiger

lshqqytiger commented Feb 13, 2023

Good news? It is Coming Soon.

@xsacha

xsacha commented Feb 13, 2023

Coming Soon or Coming Soon™️?

We've seen Windows release notes and Windows DLL files before.. :/

@Coderx7

Coderx7 commented Feb 13, 2023

It's not needed as badly as before anyway, especially when the likes of MLIR projects (like torch-mlir) are working very well today.
Though it's still very unfortunate!

@YuriyTigiev

We are all waiting for a big, good surprise from AMD.

@boxerab

boxerab commented Feb 26, 2023

Probably dropping OpenCL in the Windows port...

@scarsty

scarsty commented Feb 27, 2023

mark...

@AlphaJuliettOmega

OpenCL being silently dropped when I upgraded from an RX 480, plus ROCm not even supporting 6xxx cards, is very confusing to say the least.

Looking forward to seeing if ROCm is usable on Windows.

On Linux, using ROCm causes driver timeouts, hard crashes, artifacting, complete crashes if playing a video while computing, etc. (I assume this is why it's not even officially supported.)
Hopefully a fresh start on Windows allows a cleaner implementation. Best of luck.

@boxerab

boxerab commented Feb 28, 2023

OpenCL being silently dropped when I upgraded from an RX 480, plus ROCm not even supporting 6xxx cards, is very confusing to say the least.

Really? That is very disappointing.

Looking forward to seeing if ROCm is usable on Windows.

On Linux, using ROCm causes driver timeouts, hard crashes, artifacting, complete crashes if playing a video while computing, etc. (I assume this is why it's not even officially supported.) Hopefully a fresh start on Windows allows a cleaner implementation. Best of luck.

AMD consumer compute is a disaster. The writing is on the wall: OpenCL is not going to be a workable solution for AMD cards going forward. Time to build a HIP backend or switch to CUDA.

@iperov

iperov commented Mar 20, 2023

Currently you can use DirectML for TensorFlow; it's ~1.6x slower.

DML for PyTorch is ~2.2x slower, and has a lot of bugs.

onnxruntime-directml inference speed is the same as on CUDA.
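A quick way to read figures like these is to restate "N× slower" as a fraction of baseline (CUDA) throughput. The snippet below just does that arithmetic with the rough factors quoted above; these are the commenter's numbers, not fresh benchmarks.

```python
# Restate "N x slower" as a fraction of baseline (CUDA) throughput.
def relative_throughput(slowdown: float) -> float:
    """An 'N x slower' backend delivers 1/N of the baseline throughput."""
    return 1.0 / slowdown

# Rough factors quoted in the comment above.
for name, slowdown in [("tensorflow-directml", 1.6), ("pytorch-directml", 2.2)]:
    print(f"{name}: ~{relative_throughput(slowdown):.0%} of CUDA throughput")
```

So ~1.6x slower means roughly 62% of CUDA throughput, and ~2.2x slower roughly 45%.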

@YuriyTigiev

Time to change my PC to one compatible with CUDA. :(

@ThomazDiniz

Any update?

@iperov

iperov commented Dec 26, 2023

Seriously, DirectML for Windows has ~90% of CUDA inference speed and works for AMD.
You guys have been waiting all this time for ROCm just for inference?

@nartmada
Collaborator

nartmada commented Jan 4, 2024

@xsacha

xsacha commented Jan 4, 2024

There is ROCm support for Windows. Thanks.

Interesting docs that seem to equate HIP with ROCm. You'll notice it says HIP Runtime and HIP SDK, both of which are required for ROCm but are not ROCm.

Unfortunately everyone on this issue is interested in using ROCm for deep learning / AI frameworks.

The docs even contain a link 'What is ROCm?' where it states:

ROCm is powered by AMD’s Heterogeneous-computing Interface for Portability (HIP)

ROCm is fully integrated into machine learning (ML) frameworks, such as PyTorch and TensorFlow.

From the link:

Component               | Linux                     | Windows
Debugger                | rocgdb                    | No debugger available
Communication Libraries | Supported                 | Not available
AI Libraries            | MIOpen, MIGraphX          | Not available
AI Frameworks           | PyTorch, TensorFlow, etc. | Not available
CMake HIP Language      | Enabled                   | Unsupported

Ergo, there is still no support for ROCm on Windows.

@johnnynunez

There is ROCm support for Windows. Thanks.

Interesting docs that seem to equate HIP with ROCm. You'll notice it says HIP Runtime and HIP SDK, both of which are required for ROCm but are not ROCm.

Unfortunately everyone on this issue is interested in using ROCm for deep learning / AI frameworks.

The docs even contain a link 'What is ROCm?' where it states:

ROCm is powered by AMD’s Heterogeneous-computing Interface for Portability (HIP)

ROCm is fully integrated into machine learning (ML) frameworks, such as PyTorch and TensorFlow.

From the link:

Component               | Linux                     | Windows
Debugger                | rocgdb                    | No debugger available
Communication Libraries | Supported                 | Not available
AI Libraries            | MIOpen, MIGraphX          | Not available
AI Frameworks           | PyTorch, TensorFlow, etc. | Not available
CMake HIP Language      | Enabled                   | Unsupported

Ergo, there is still no support for ROCm on Windows.

Yeah, you guys mention that. Soon it will be supported. MIOpen is the equivalent of cuDNN:
https://github.com/ROCm/MIOpen/pulls?q=is%3Aopen+is%3Apr+label%3Awindows

@leo-smi

leo-smi commented Jan 8, 2024

It's coming: https://rocmdocs.amd.com/en/latest/what-is-rocm.html

@javag97

javag97 commented Feb 15, 2024

NOTE: ROCm is not available on Windows. Why is AMD like this? 😢

@radudiaconu0

@javag97 Yeah, instead of full ROCm support after years of waiting, they gave us something half-baked that is basically useless for deep learning. I don't understand why. They knew the community wanted this for deep learning, not for running Blender.

@johnnynunez

johnnynunez commented Feb 15, 2024

@javag97 Yeah, instead of full ROCm support after years of waiting, they gave us something half-baked that is basically useless for deep learning. I don't understand why. They knew the community wanted this for deep learning, not for running Blender.

If you look at the repositories, MIOpen is already fully compatible with Windows. With AMDMIGraphX, they are working on fixing the last problems. They will probably release support in ROCm 6.1.
https://github.com/ROCm/AMDMIGraphX/pulls?q=is%3Aopen+is%3Apr+label%3AWindows

@radudiaconu0

@johnnynunez I really hope so. I need to do my dissertation and it is deep learning :)) I hoped we would have Windows support by now. I don't want to dual boot with Linux or install Ubuntu 22.04, which is two years old.

@johnnynunez

johnnynunez commented Feb 15, 2024

@johnnynunez I really hope so. I need to do my dissertation and it is deep learning :)) I hoped we would have Windows support by now. I don't want to dual boot with Linux or install Ubuntu 22.04, which is two years old.

Linux is better because it's more stable than Windows (no blue screens)... You can try installing Ubuntu 24.04, which is out in April.

@radudiaconu0

I know... I want to make the switch, but I really want to play with ROCm till then ;))

@radudiaconu0

Too sad Microsoft forgot about PyTorch-DirectML and TensorFlow-DirectML...

@leo-smi

leo-smi commented Feb 16, 2024

Too sad Microsoft forgot about PyTorch-DirectML and TensorFlow-DirectML...

For object detection models, DirectML is only available for inference on Windows. Too sad.

@iperov

iperov commented Feb 16, 2024

Too sad Microsoft forgot about PyTorch-DirectML and TensorFlow-DirectML...

For object detection models, DirectML is only available for inference on Windows. Too sad.

PyTorch-DirectML works on Windows, but still has a lot of bugs.

@leo-smi

leo-smi commented Feb 17, 2024

Too sad Microsoft forgot about PyTorch-DirectML and TensorFlow-DirectML...

For object detection models, DirectML is only available for inference on Windows. Too sad.

PyTorch-DirectML works on Windows, but still has a lot of bugs.

For object detection?

Edit: found something here: https://www.mssqltips.com/sqlservertip/7906/object-detection-machine-learning-algorithm-using-python/

@Spacefish

@johnnynunez I really hope so. I need to do my dissertation and it is deep learning :)) I hoped we would have Windows support by now. I don't want to dual boot with Linux or install Ubuntu 22.04, which is two years old.

It works perfectly fine in the current dev version of Ubuntu 24.04!

But it's not plug and play, at least not with a card that isn't officially supported. I had to build PyTorch from source to support my 7800 XT using ROCm 6.0.2.
I built torchaudio and torchvision as well, and they both work fine too.

TensorFlow works as well, but is a little more involved. Just use the fork in the AMD repo.

If you want to do any serious ML, you won't be happy on Windows in the long run. It is and always will be a second-class citizen, on the green team as well as the red one.
The easiest way, if you want a UI: get a second PC, run headless Linux on it, and host a JupyterLab server. It will be just like Google Colab, and you won't crash the UI you are working in when you exceed your available VRAM.

@iperov

iperov commented Feb 17, 2024

If you want to do any serious ML, you won't be happy on Windows in the long run.

lmao

@johnnynunez

@johnnynunez I really hope so. I need to do my dissertation and it is deep learning :)) I hoped we would have Windows support by now. I don't want to dual boot with Linux or install Ubuntu 22.04, which is two years old.

It works perfectly fine in the current dev version of Ubuntu 24.04!

But it's not plug and play, at least not with a card that isn't officially supported. I had to build PyTorch from source to support my 7800 XT using ROCm 6.0.2. I built torchaudio and torchvision as well, and they both work fine too.

TensorFlow works as well, but is a little more involved. Just use the fork in the AMD repo.

If you want to do any serious ML, you won't be happy on Windows in the long run. It is and always will be a second-class citizen, on the green team as well as the red one. The easiest way, if you want a UI: get a second PC, run headless Linux on it, and host a JupyterLab server. It will be just like Google Colab, and you won't crash the UI you are working in when you exceed your available VRAM.

Yes, I know. I use the scripts from https://github.com/johnnynunez/rocm_lab, updated for ROCm 6.0.2.

@Fawean

Fawean commented Feb 19, 2024

Hope we can use true ROCm and PyTorch on Windows in 2024, please.

@lshqqytiger

We already have..

@hiepxanh

Should we use ZLUDA to replace HIP?

@lshqqytiger

lshqqytiger commented Feb 19, 2024

ZLUDA uses the HIP SDK, so it cannot replace it.
ZLUDA is a CUDA-compatible layer that depends on the HIP SDK. It lets us run PyTorch built with CUDA without recompiling it.
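A compatibility layer like this can be pictured as a thin shim: the caller speaks CUDA-flavoured names, and every call is forwarded to a HIP-flavoured backend that does the real work. The Python below is a pure concept sketch with hypothetical classes; it has nothing to do with ZLUDA's actual internals.

```python
# Concept sketch only (not ZLUDA's real implementation): a compatibility
# layer exposes CUDA-flavoured names and forwards each call to the
# HIP-flavoured backend underneath.
class HipBackend:
    def hipMalloc(self, nbytes: int) -> bytearray:
        return bytearray(nbytes)  # stand-in for a device allocation

    def hipFree(self, buf: bytearray) -> None:
        buf.clear()

class CudaShim:
    """Speaks the CUDA API on top; delegates to HIP underneath."""
    def __init__(self, backend: HipBackend):
        self._backend = backend

    def cudaMalloc(self, nbytes: int) -> bytearray:
        return self._backend.hipMalloc(nbytes)

    def cudaFree(self, buf: bytearray) -> None:
        self._backend.hipFree(buf)

cuda = CudaShim(HipBackend())
buf = cuda.cudaMalloc(16)  # the caller writes CUDA-style code...
print(len(buf))            # ...but HIP did the allocation: prints 16
```

This is also why the shim cannot replace the HIP SDK: remove the backend and every forwarded call has nowhere to go.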

@radudiaconu0

Yeah, but we still cannot train anything.

@scarsty

scarsty commented Feb 19, 2024

AMD must publish an official edition of MIOpen ASAP. ROCm without MIOpen is meaningless.

@radudiaconu0

AMD must publish an official edition of MIOpen ASAP. ROCm without MIOpen is meaningless.

That was exactly my point. They released the HIP SDK without the most important part :))

@javag97

javag97 commented Feb 22, 2024

I've taken a complete side mission into Kubuntu 22.04, and Kubuntu itself is great! Trying to get an RX 7900 XTX to work has been nothing short of extremely frustrating. I've been splitting time between docs, Stack Overflow, GitHub issues, and Reddit to put it all together. Bleeding edge is almost always exhausting.

@Coderx7

Coderx7 commented Feb 22, 2024

@lshqqytiger how does it fare against a native CUDA device with similar specs?

@lshqqytiger

It uses the HIP SDK internally, so performance is almost the same as with Linux ROCm.
