Replies: 116 comments 132 replies
-
Honestly, anything that has 16GB of VRAM or more (or the ability to reserve that much, e.g. iGPUs like the 680M/780M/890M and the Strix Halo iGPUs).
-
I would like ROCm support to be restored for all the relatively recent GPUs (the last 5-6 years) that AMD has released and then dropped ROCm support for. I don't care much about new hardware; actually supporting the AMD cards I bought in the past would be great.
-
I think it might be interesting to share here that Debian has built a CI at ci.rocm.debian.net where the ROCm stack, and any package that depends on it, is continuously tested. Our CI includes all of the architectures listed above. We would be happy to cooperate on increasing device support for Debian and derivatives.
-
This is not a "device" support wish but a "platform" one. Stable Diffusion on native Windows with AMD GPUs is not possible until we get Windows support for the "AI Libraries" (specifically MIOpen) listed here: https://rocm.docs.amd.com/projects/install-on-windows/en/develop/reference/component-support.html. That support is required to get PyTorch working. I've seen so many AMD users recently selling their AMD GPUs and buying "the competition" because WSL and ZLUDA are their only options, and those are half-baked solutions. Native Windows support should be a top priority.
-
A bit older than the ones listed there, but I own a 5700 XT, and judging by my extensive searching online for how to get it working, a good few other people do too. I'm still holding on to the precompiled torch 1.13 / ROCm 5.2 wheel for Python 3.10, which is the last one that works (after setting HSA_OVERRIDE_GFX_VERSION). Later versions seem to either crash outright, or import correctly and then crash as soon as a tensor is sent to the GPU. Using this older version as a workaround was doable back when torch 2.0 was new, but now that most new code has required 2.0+ for a while, the card is effectively non-functional for any recently written code.
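For anyone else stuck on this card, here is a minimal sketch of that workaround in Python. The pinned wheel matches the versions above; the override value 10.3.0 (spoofing a gfx1030 target for the gfx1010-based 5700 XT) is the commonly reported choice and is an assumption here, not an officially supported configuration.

```python
# Sketch of the 5700 XT workaround: the pinned torch 1.13 + ROCm 5.2 wheel for
# Python 3.10, with HSA_OVERRIDE_GFX_VERSION set before torch initializes.
# The 10.3.0 value is an assumption (the commonly reported gfx1030 spoof).
import os

# Must be set before torch loads the HSA runtime.
os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"

import torch

print(torch.__version__)           # expected: 1.13.x+rocm5.2
print(torch.cuda.is_available())   # ROCm builds expose the GPU via the CUDA API

# Sending a tensor to the GPU is exactly where the newer wheels reportedly crash.
x = torch.ones(4, device="cuda")
print((x * 2).cpu())
```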
-
Considering my GPU (6600 XT) was released near the end of 2021, it would be nice to know that I don't need to buy a new GPU every year just to have support. It would also be nice to have actual, proper Windows support instead of having to deal with the clusterfuck that is ZLUDA or other translation layers. This kind of treatment from AMD is why I'll probably go Nvidia the next time my budget allows it.
-
APU support opens the door to introducing this software to a wide audience, so please consider hitting the entire APU line (RDNA 3 and 3.5). Early ROCm worked on the 780M and got me in the front door of working with this software at all (that said, I had to use env-var hacks to get it functioning). Later versions of ROCm stopped working entirely. The hobbyist crowd would greatly benefit from APU support, which hopefully carries a financial incentive for AMD in market share and product familiarity (hobbyist engineers do something neat at home, then bring the concepts to work, where you pick up the larger purchases). If I could feel confident in better consumer ROCm support, I would gladly have dropped money on two AMD graphics cards for the LLM work I do.
-
why not just all, like the other company? ;-)
-
ROCm on Windows, all RDNA 3 and newer. Don't forget integrated GPUs. Maybe next year?
-
I wish AMD would look back at the RX 500 and RX 5000 series, because the physical architectures of both lend themselves to really interesting compute: the RX 580 in particular is very good as a modular scale-up/scale-down unit at 75 W. Based on some back-of-the-napkin maths I've done, an RX 580 8 GB running an 8-billion-parameter model at 8-bit quantization can pull about 15 to 30 tokens per second.
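To make that back-of-the-napkin maths explicit, here is a rough bandwidth-bound estimate. The 256 GB/s figure is the RX 580's rated memory bandwidth; the efficiency range used to bracket real-world results is an assumption.

```python
# Rough bandwidth-bound decode estimate for the RX 580 figures quoted above.
# Assumption: token generation is memory-bandwidth bound, and each generated
# token reads roughly the full weight set once.
bandwidth_gb_s = 256    # RX 580 rated memory bandwidth (8 Gbps GDDR5, 256-bit bus)
params_billions = 8     # 8-billion-parameter model
bytes_per_param = 1     # 8-bit quantization

model_gb = params_billions * bytes_per_param    # ~8 GB of weights
ceiling_tok_s = bandwidth_gb_s / model_gb       # ~32 tokens/s theoretical ceiling

# Assuming 50-90% of the ceiling in practice lands inside the quoted range.
print(f"theoretical ceiling: {ceiling_tok_s:.0f} tok/s")
print(f"rough estimate: {0.5 * ceiling_tok_s:.0f}-{0.9 * ceiling_tok_s:.0f} tok/s")
```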
-
Hi. You should start by supporting Strix Halo and RDNA 4, then RDNA 3 and the earlier APUs.
-
Thank you for reaching out and at least trying to extend the device support. The limited consumer hardware support has always been one of the weakest points of ROCm, and if AMD is serious about the future of ROCm, at least all upcoming hardware should be supported. Being able to get used to a platform without spending thousands is actually huge.
-
At one time Kaveri was promoted as a hybrid processor, but while HSA was still being implemented, its support disappeared. Given the marketers' promises, it would only be fair to deliver HSA + ROCm support for the Kaveri APUs.
-
Missing poll option: actually support the ✅-marked devices consistently. There's not much point in having a green icon in the support matrix if it doesn't mean your device is supported.
-
We would like to hear from the community which other cards you would like to see ROCm support for. Currently, the compatibility matrix for Linux is at https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/system-requirements.html and the one for Windows is at https://rocm.docs.amd.com/projects/install-on-windows/en/latest/reference/system-requirements.html.
No guarantees of future support, but we will try hard to add it.
Poll categories:
ROCm on Linux: AMD Instinct · AMD Radeon PRO · AMD Radeon
HIP Runtime and SDK on Windows: AMD Radeon PRO · AMD Radeon
1k votes