[Feature]: Wait... Is this real? #2782
Comments
Yes, for now gfx1010 cannot be used with ROCm >= 5.3; see issue #2527 for more details.
Yes. Well... I was thinking maybe it is time to actually upgrade my GPU to something newer (and, maybe, supported), especially something with more VRAM. So I was originally planning to buy a new RX 7xxx. Then I saw this^^^ and I finally realized: why would I invest a lot of money in the latest RDNA3 GPU when it might, or will, get obsoleted by AMD yet again in a year or two? So I will quite soon be stuck in the same situation we are in now with RDNA1 and RDNA2 GPUs? (And the worst thing is that on AMD's side, everything is quiet about this! No "that is not correct", no "it does not work that way", no "we have plans...". Just silence!) This behaviour is so anti-consumer that I have a hard time staying polite here!
Yeah, sadly this is the case. As much as I love AMD GPUs, their attitude towards consumer products is terrible. I have old Nvidia Maxwell GPUs which are better supported than AMD GPUs that are only 2-3 generations old. Their attitude towards consumer GPUs really needs to change.
If ROCm is an open-source driver stack, then why aren't there any enthusiasts who would add the necessary support? Not long ago I became the owner of an RX 5700 and I am very upset by this situation.
It's a bit different. You can have the latest version installed, but PyTorch (and, I guess, every other piece of software which relies on ROCm) must be compiled for ROCm <= 5.2. I mean, I get that; Navi1 was never officially supported, and it works on older PyTorch versions only thanks to a workaround.
There are, sometimes. For example, I just found that five days ago a PR was merged which aims to make life easier when using rocBLAS on gfx1010 (ironically, a problem I experienced myself).
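For anyone stuck on a Navi1 card, pinning PyTorch to a ROCm 5.2 build looks roughly like the following. This is a sketch, not a verified recipe: the exact wheel versions should be checked against the PyTorch download page; 1.13.1 is used here only as an example of a release that shipped rocm5.2 wheels.

```shell
# Install a PyTorch build compiled against ROCm 5.2, the last series where
# the gfx1010 workaround still functions. Versions are illustrative; check
# https://pytorch.org for the wheels actually available.
pip install torch==1.13.1+rocm5.2 torchvision==0.14.1+rocm5.2 \
    --extra-index-url https://download.pytorch.org/whl/rocm5.2
```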
I've had a 5700 XT for years and it worked with ROCm for some time, but 5.4 broke it, as mentioned. I have now upgraded to a 7800 XT, which seems to work fine with HSA_OVERRIDE_GFX_VERSION=1030, unless you use a lot of VRAM, in which case the compositor crashes 😅.

My two cents about the future:

Speculation 1: One reason could be the ability to sell chips with integrated RDNA2/3 but without XDNA into low-end Windows 11 laptops, once Microsoft requires a certain AI inference performance on the platform. If you can get away with less chip area by using the same silicon for both the GPU and AI, that saves a lot of money: software development cost is a one-time thing plus some maintenance, but hardware production cost is per unit sold.

Speculation 2: AMD's strategic marketing saw the big influence the green team's platform has in AI and is trying to push their own platform the same way, by lowering the entry barrier. AI is seen as a big growth market for the next few years, with LLMs and image-generation networks only getting broad visibility in late 2023.

Not sure at all where all of this leads us, though, as recent image-generation networks and LLMs need more VRAM than is available in most consumer cards. Look at Mixtral 8x7B, for example: the 4-bit-quantized form needs 23 GB of VRAM for inference, the float16 version needs more than 90 GB for inference, and for training you need even more. IMHO local inference will be limited to smaller tasks, like the typical video-conferencing things: noise reduction, background blur, intelligent filters and so on.
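For reference, the override mentioned above is just an environment variable. Note that the commonly documented spelling of the value is 10.3.0 rather than 1030:

```shell
# Tell the ROCm runtime to treat the card as gfx1030 (RDNA2).
# This is an unsupported spoof: it works for some cards and workloads,
# but crashes (like the compositor issue above) are a known risk.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
echo "$HSA_OVERRIDE_GFX_VERSION"
```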
Well, for me a "solution" for local inference with bigger models was llama.cpp with hipBLAS support; GGUF models can be partially offloaded to the GPU while the remaining data stays in system RAM. Anyway... I suggest you keep an eye on #2527.
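A minimal sketch of that setup, assuming a hipBLAS-capable ROCm install. The model filename here is a placeholder, and llama.cpp's build flags and binary names have changed over time, so check its current README before copying any of this:

```shell
# Build llama.cpp with hipBLAS and partially offload a GGUF model.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_HIPBLAS=1
# -ngl N offloads N layers to VRAM; the remaining layers stay in system RAM,
# which is what makes models bigger than your VRAM usable at all.
./main -m ./models/mixtral-8x7b.Q4_K_M.gguf -ngl 20 -p "Hello"
```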
gfx1010 should be working fine after ROCm/Tensile#1862. If you can't wait for AMD to release the new rocBLAS, you may want to ask the ROCm maintainers of your distribution to backport that patch. It should apply cleanly to any ROCm version >= 5.5.
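As a hypothetical sketch, applying that PR to a local Tensile checkout could look like this. The branch name is illustrative, and the patch URL simply follows GitHub's usual `pull/<N>.patch` convention; these are not verified commands for any particular distribution's build system:

```shell
# Fetch the merged PR as a mailbox patch and apply it to a release branch
# before rebuilding rocBLAS/Tensile against it.
git clone https://github.com/ROCm/Tensile
cd Tensile
git checkout rocm-5.5.x          # any release branch >= 5.5, per the comment above
curl -L https://github.com/ROCm/Tensile/pull/1862.patch | git am
```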
Well, yes... That PR makes the Tensile library for gfx1010 be compiled by default, but that doesn't automatically mean it will be added to the final packages, since it isn't an officially supported arch. It will probably depend on how each distribution builds its packages. But yes, it will most likely be included automatically. I've just tried to compile the rocBLAS package from Arch Linux, changing the PKGBUILD a bit to make it use version 6.0.2 plus that patch (it has been merged into the develop branch, but is not yet in 6.0.2). It does indeed compile the lib for gfx1010.
I'm not sure about whether
gfx1010 can be made to work. It just needs a few more hacks and steps than simply setting HSA_OVERRIDE_GFX_VERSION.
Thank you very much for your effort! Anyway, I've stopped holding my breath hoping for a miracle. AMD is a no-go for now, and for the foreseeable future, for anything hardware-accelerated other than games. I might return to the AMD HIP/ML topic in a year or two; maybe there will be some progress by then... 😞
Closing ticket as this is not an issue. |
Suggestion Description
So... can we just throw away our otherwise functioning Navi GPUs... because there is no further support, and there will not be any?! 😭
Operating System
Ubuntu 22.04
GPU
RX 5600 XT
ROCm Component
Any relatively new (and supported)