Will ROCm ever support Windows? Or never? #666
Comments
If you mean the ROCm kernel driver and HSA stack, there are currently no plans to port our Linux driver or full HSA runtime to Windows. The driver itself is highly tied to Linux, and our HSA software stack (Thunk and ROCr) is in turn highly tied to our driver. We already support OpenCL on Windows through software included with our Catalyst drivers. Our HIP and HCC compilers/runtimes, and the libraries and software built using them (such as rocBLAS, MIOpen, and TensorFlow and PyTorch built on MIOpen), may technically be possible to port to Windows, but I cannot give any public commitment about when or if AMD will perform these ports.
@jlgreathouse Given the recent announcements and the collaboration between Microsoft and AMD on the Surface Laptop and their custom chips, do you mind providing some insight into whether we can expect any support in the future? Thanks.
When WSL makes GPU passthrough possible, everything will be solved.
Not at all. There's no way to install drivers on WSL. |
Won't it change with WSL2? I was thinking that WSL2 and SR-IOV would make it possible to run ROCm on Windows. Not that I'd be willing to pay for a GPU with SR-IOV...
Deep learning is the future. Why doesn't AMD care about this? Almost nobody has done any studying or working with AMD GPUs. My MacBook Pro 16 can't be used for deep learning with its Radeon Pro 5500M. That's a piece of shit!
Edit: I originally thought you meant a 2016 MacBook; I have since realized you mean the new 16. This is not the place for comments like that, but even if it were... your MacBook does not have anywhere NEAR the compute capability of a modern GPU. You would be better off using Google Colab for free.
I think this is the first time that I find a correlation between the description of an issue in github and the issue number! :) |
Is ROCm coming to Windows through WSL2? MS just announced GPU compute workload support through WSL2. I hope it's not just for NVidia... |
It seems WSL2 will support DirectML and CUDA. Will the HIP API be ported as well? I have not dug into DirectML too deeply. Is DirectML a machine-learning-specific API, or is it general enough to support other GPGPU applications like CUDA/OpenCL? So many APIs for GPU compute. I was hopeful that HIP would serve as a unifying GPGPU development API/language, but alas...
|
Intel supports Windows on their new Arc GPUs and still nothing from AMD. |
It can work on Windows using Microsoft Antares, but I don't have the time for that! https://github.com/microsoft/antares It's a big project to make it work perfectly.
Work on a windows port is well underway. https://github.com/amd/rocm-examples You can see signs on many of our repositories. |
From that repo: "ROCm toolchain for Windows (No public release yet)" Still waiting :( There has been talk of ROCm for Windows for roughly 5 years now, with small hints like this on AMD repos and docs. It doesn't give me any more hope to see that. I had fast GPGPU working on Windows with CTM (Close to Metal) / the Stream SDK over a decade ago. Then AMD went silent (buggy and slow OpenCL) and I've been forced to use CUDA ever since. Pretty much overnight, Intel has appeared on the scene and offered cross-platform support without any issues.
This is a shame. AMD should follow NVIDIA's example with CUDA; it seems to me that AMD is lazy.
Good news? It is Coming Soon. |
Coming Soon or Coming Soon™️? We've seen Windows release notes and windows DLL files before.. :/ |
It's not needed as badly as before anyway, especially when the likes of MLIR projects (like torch-mlir) are working very well today.
We are all waiting for a big surprise from AMD.
probably dropping OpenCL in the windows port ... |
mark... |
OpenCL being silently dropped when I upgraded from an RX 480, plus ROCm not even supporting 6xxx cards, is very confusing to say the least. Looking forward to seeing if ROCm is usable on Windows. On Linux, using ROCm causes driver timeouts, hard crashes, artifacting, a complete crash if playing a video while computing, etc. (assuming this is why it's not even officially supported).
Really? That is very disappointing.
AMD consumer compute is a disaster. The writing is on the wall - OpenCL is not going to be a workable solution for AMD cards going forward. Time to build a HIP backend or switch to CUDA. |
Currently you can use DirectML:
- DirectML for TensorFlow is ~1.6× slower.
- DML for PyTorch is ~2.2× slower and has a lot of bugs.
- onnxruntime-directml inference speed is the same as on CUDA.
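To make the onnxruntime-directml point concrete, here is a minimal sketch of selecting the DirectML execution provider in ONNX Runtime. `DmlExecutionProvider` is the provider name that onnxruntime-directml registers; the snippet falls back to CPU if DirectML is unavailable, and to nothing if onnxruntime itself is not installed.

```python
# Hedged sketch: prefer the DirectML execution provider in ONNX Runtime when
# the onnxruntime-directml package is installed, otherwise fall back to CPU.
try:
    import onnxruntime as ort

    available = ort.get_available_providers()
    chosen = ("DmlExecutionProvider"
              if "DmlExecutionProvider" in available
              else "CPUExecutionProvider")
except ImportError:
    # onnxruntime is not installed at all.
    chosen = None

print("Execution provider:", chosen)
# A session would then be created with, e.g. (model path is illustrative):
# sess = ort.InferenceSession("model.onnx", providers=[chosen])
```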
Time to switch to a PC compatible with CUDA. :(
any update? |
Seriously, DirectML on Windows has ~90% of CUDA inference speed and works for AMD.
Hi @Color-Dark, please check https://rocm.docs.amd.com/projects/install-on-windows/en/latest/reference/system-requirements.html and https://rocm.docs.amd.com/en/latest/. There is ROCm support for Windows. Thanks. |
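For anyone wanting to check whether the Windows HIP SDK from those docs is actually visible on their machine, a stdlib-only probe can look for the HIP runtime library. The library name `amdhip64` is an assumption here (the HIP runtime ships as amdhip64.dll on Windows and libamdhip64.so on Linux), not something stated in this thread; verify it against your install.

```python
import ctypes.util

def find_hip_runtime():
    """Return the HIP runtime library name/path if the loader can see it, else None.

    "amdhip64" is assumed to be the HIP runtime library name (amdhip64.dll on
    Windows, libamdhip64.so with ROCm on Linux); check your own install.
    """
    return ctypes.util.find_library("amdhip64")

hip = find_hip_runtime()
print("HIP runtime:", hip if hip else "not found")
```

On a machine without the HIP SDK or ROCm this simply prints "not found" rather than failing.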
Interesting docs that seem to equate HIP with ROCm. Notice it says HIP Runtime and HIP SDK, both of which are required for ROCm but are not ROCm. Unfortunately, everyone on this issue is interested in using ROCm for deep learning / AI frameworks. The docs even contain a link, 'What is ROCm?', where it states:
From the link:
Ergo, there is still no support for ROCm on Windows.
Yeah, you guys mention that. Soon it will be supported. MIOpen is the equivalent of cuDNN.
It's coming: https://rocmdocs.amd.com/en/latest/what-is-rocm.html
NOTE: ROCm is not available on Windows, why is AMD like this 😢 |
@javag97 Yeah, instead of full ROCm support after years of waiting, they gave us something half-baked that is basically useless for deep learning. I don't understand why. They knew the community wanted this for deep learning, not for running Blender.
If you look at the repositories, MIOpen is already fully compatible with Windows. With AMDMiGraphX, they are working on fixing the latest problems. They will probably release support with ROCm 6.1.
@johnnynunez I really hope so. I need to do my dissertation and it is deep learning :)) I hoped we would have Windows support by now. I don't want to dual boot with Linux or install Ubuntu 22.04, which is 2 years old.
Linux is better because it's more stable than Windows (no blue screens)... You can try installing Ubuntu 24.04, which is out in April.
I know... I want to make the switch, but I really want to play with ROCm till then ;))
Too sad Microsoft forgot about pytorch-directml and tensorflow-directml...
For object detection models, DirectML is only available for inference on Windows. Too sad.
Pytorch-directml works on Windows, but still has a lot of bugs.
For object detection? Edit: found something here: https://www.mssqltips.com/sqlservertip/7906/object-detection-machine-learning-algorithm-using-python/
Works perfectly fine in the current dev version of Ubuntu 24.04! But it's not plug and play, at least not with a card that isn't officially supported. I had to build PyTorch from source to support my 7800 XT using ROCm 6.0.2. TensorFlow works as well, but is a little more involved; just use the fork in the AMD repo. If you want to do any serious ML, you won't be happy on Windows in the long run. It is, and always will be, a second-class citizen, on the green as well as on the red team.
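The "not plug and play" part for unsupported cards often comes down to the gfx target. A commonly reported user workaround (an assumption here, not an official AMD recommendation) is setting `HSA_OVERRIDE_GFX_VERSION` so the ROCm runtime treats the card as a supported target before launching the training script:

```python
import os
import subprocess
import sys

# Commonly reported workaround for officially unsupported RDNA3 cards:
# HSA_OVERRIDE_GFX_VERSION makes the ROCm runtime treat the GPU as a
# supported gfx target. "11.0.0" (gfx1100) is an assumed value for a
# 7800 XT (gfx1101); verify the right value for your own card.
env = dict(os.environ, HSA_OVERRIDE_GFX_VERSION="11.0.0")

# "train.py" is a placeholder for whatever script you actually run:
# subprocess.run([sys.executable, "train.py"], env=env, check=True)

print("Override set to:", env["HSA_OVERRIDE_GFX_VERSION"])
```

Exporting the variable in the shell before running the script achieves the same thing; the Python wrapper just keeps the override scoped to one child process.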
lmao |
Yes, I know it. I use updated scripts from https://github.com/johnnynunez/rocm_lab for ROCm 6.0.2.
Hope we can use true ROCm and PyTorch on Windows in 2024, please.
Should we use ZLUDA to replace HIP?
ZLUDA uses HIP SDK. It cannot replace it. |
Yeah, but we still cannot train anything.
AMD must publish an official edition of MIOpen ASAP. ROCm without MIOpen is meaningless.
That was exactly my point. They released the HIP SDK without the most important part :))
I've taken a complete side mission into Kubuntu 22.04, and Kubuntu itself is great! Trying to get an RX 7900 XTX to work has been nothing short of extreme frustration. I've been splitting time between docs, Stack Overflow, GitHub issues, and Reddit to put it all together. Bleeding edge is almost always exhausting.
@lshqqytiger How does it fare against a native CUDA device with similar specs?
It uses the HIP SDK internally, so performance is almost the same as Linux ROCm.
Do you have a plan for supporting ROCm on the Windows platform?