RX 470, 480, 570, 580 no longer support OpenCL on ROCm 4.5 #1659
Comments
|
AFAIK, we have not removed any code intentionally. But maybe something changed in the stack and we don't validate gfx8 on ROCm, so it might not be working anymore. Anyhow, I am not closing this ticket right now. Let me wait for some time. |
|
@ROCmSupport the RX 590 was released in 2018. This is way too early to drop support. |
|
Maybe you mean OpenCL. The MIOpen run succeeded only with a small patch. |
|
@boxerab I solved this issue by using a docker image of certain version for tensorflow-rocm, when I had tried to use a tensorflow with rx570. Here's my repo with a few little scripts that I used to automate this process: https://github.com/kvirikroma/tensorflow-rocm-legacy |
|
@kvirikroma thank you. I need OpenCL support. That's gone now. |
|
Is it possible that this commit is the cause? This looks exactly like where OpenCL was disabled. Is it difficult to revert? |
|
I can try to patch ROCclr for OpenCL, but I am not familiar with OpenCL. Could you show me a demo for testing? |
|
Here are a few examples: https://rocmdocs.amd.com/en/latest/Programming_Guides/Opencl-programming-guide.html#example |
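In addition to the linked AMD samples, a quick standalone sanity check is to ask the ICD loader which platforms and devices it can actually see, along with each device's reported OpenCL version; a gfx803 card should show up here before any kernel-level demo is worth attempting. This is only a sketch (the file name and build line below are suggestions, not something from the samples):

```cpp
// clinfo-lite.cpp -- hypothetical file name; build with: g++ clinfo-lite.cpp -lOpenCL -o clinfo-lite
// Sketch: list every OpenCL platform/device the ICD loader exposes, with the reported version.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(0, nullptr, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        std::printf("No OpenCL platforms found.\n");
        return 1;
    }
    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        char pname[256] = {0};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(pname), pname, nullptr);
        std::printf("Platform: %s\n", pname);

        cl_uint num_devices = 0;
        if (clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, 0, nullptr, &num_devices) != CL_SUCCESS)
            continue;  // platform reports no devices (e.g. gfx803 filtered out by the runtime)
        std::vector<cl_device_id> devices(num_devices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_ALL, num_devices, devices.data(), nullptr);

        for (cl_device_id d : devices) {
            char dname[256] = {0}, dver[256] = {0};
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(dname), dname, nullptr);
            clGetDeviceInfo(d, CL_DEVICE_VERSION, sizeof(dver), dver, nullptr);
            std::printf("  Device: %s (%s)\n", dname, dver);
        }
    }
    return 0;
}
```

If the card does not appear in this listing, the linked samples will not find it either.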
those examples look good, also there is a |
|
I am afraid that just reverting these commits does not work. My card is an RX 580. |
|
Thanks, so it looks like AMD really did remove support |
|
And intentionally so. |
|
Has anyone tried to set |
|
hmmm, that's an interesting idea |
|
@ROCmSupport can you comment on Polaris support in recently released ROCm 5.0 ? |
|
Dear @ROCmSupport,
Nvidia recently dropped Kepler support in CUDA in June 2021, and Kepler was released in April 2012. That's 9 years of support. The RX 470 was released in June 2016, so ~5 years, a bit over half as much. People who bought an RX 590 (released in November 2018, easily on shelves throughout 2019) only got 2 years of support. 2 years of support!

I'm teaching GPGPU to physicists at university and we have a BYOD policy (teaching OpenCL, HIP/CUDA, SYCL), and easily the students who suffer the most are those sporting AMD hardware (myself included, running an RX 580 laptop). It's increasingly hard to install and run any of these APIs. Everything stems from spotty gfx803 support being moved to "partial support" (whatever that means) way too early. AMD shouldn't release products they won't support.

(FWIW, even in my professional capacity it's becoming harder to justify recommending CDNA products, due to all of them being gfx9XYZ variants for such a long time. I can't say with a straight face that MI100s/MI200s will not share the same fate as RX 590s, that MI200 successors won't sport a new ISA, and that gfx9XYZ won't be dropped ever so swiftly as an ancient ISA flavor. Professional Fiji owners have been burned like this before, not just consumer card owners.)

My experience trying to get gfx803 working: the user experience is bad with all APIs. This doesn't incentivize users to upgrade and buy AMD HW. |
I tried that using a self-compiled ROCm 5.0 stack on Fedora but still no dice (same as @MathiasMagnus). However if someone is collecting patches to re-enable Polaris in ROCm, please let me know. Seems like there is no easy solution but maybe it isn't that hard... |
|
I have OpenCL working in Ubuntu 20.04 with an RX570. Here is what I did :
|
Nice. But I see that only OpenCL 1.2 is supported - used to be 2.x with earlier ROCm versions. Also, do you even need ROCm, as you are getting OpenCL from the amdgpu driver ? |
|
You're right on both points :
|
I think I used the wrong package. After recompiling rocm-opencl-runtime, clinfo can display the gfx803 device. BTW, my card is an RX 580, tested on Ubuntu 20.04 and ROCm 5.0.0. |
Thanks. If you can document what you did to recompile the runtime, we can try it. |
|
https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime#building
|
Did you change any settings or variables before building ? |
Sorry, I haven't tried it in my environment yet. |
|
I have uploaded the patched rocm-opencl-runtime package to GitHub.
The patch file: https://github.com/xuhuisheng/rocm-build/blob/feature/build/patch/31.rocm-opencl-runtime-rocclr-gfx803-1.patch
The build script: https://github.com/xuhuisheng/rocm-build/blob/feature/build/31.rocm-opencl-runtime.sh
And please make sure you are testing on Ubuntu 20.04 with ROCm 5.0.0 and a gfx803 card.
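As a smoke test for a rebuilt runtime that goes one step beyond clinfo listing the device, a tiny kernel compiled from source at run time exercises the full build-and-launch path on the card. This is a generic sketch, not taken from the linked patch or build script; the file name and build line are assumptions:

```cpp
// vecadd-smoke.cpp -- hypothetical file name; build with: g++ vecadd-smoke.cpp -lOpenCL -o vecadd-smoke
// Sketch: compile a trivial kernel from source at run time and launch it on the first GPU found.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSrc = R"CLC(
__kernel void vadd(__global const float* a, __global const float* b, __global float* c) {
    size_t i = get_global_id(0);
    c[i] = a[i] + b[i];
}
)CLC";

int main() {
    cl_platform_id platform;
    cl_device_id device;
    if (clGetPlatformIDs(1, &platform, nullptr) != CL_SUCCESS ||
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr) != CL_SUCCESS) {
        std::printf("No GPU device visible to OpenCL.\n");
        return 1;
    }

    cl_int err = CL_SUCCESS;
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);  // OpenCL 1.2 entry point

    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n * sizeof(float), a.data(), &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n * sizeof(float), b.data(), &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), nullptr, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, &err);
    if (clBuildProgram(prog, 1, &device, "", nullptr, nullptr) != CL_SUCCESS) {
        // A broken code-object path for the device usually surfaces here.
        std::printf("Kernel build failed.\n");
        return 1;
    }
    cl_kernel k = clCreateKernel(prog, "vadd", &err);
    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

    size_t gws = n;
    clEnqueueNDRangeKernel(queue, k, 1, nullptr, &gws, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, n * sizeof(float), c.data(), 0, nullptr, nullptr);

    std::printf("%s\n", (c[0] == 3.0f && c[n - 1] == 3.0f) ? "PASS" : "FAIL");
    // Resource releases omitted to keep the sketch short.
    return 0;
}
```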
|
We have communicated this directly to AMD through other channels, but I can only agree with the comments: this is a really bad decision. As a Fortune 500 company bidding for some of the largest supercomputers in the world, I would hope AMD is aware that high performance GPU computing is not merely a matter of which GPUs can run a particular version of a driver, but a landscape of infrastructure including hundreds of software packages that need to be ported, validated, and debugged.

We happen to develop GROMACS, which is a fairly broadly used computational chemistry package, and we're also fortunate enough to have access to plenty of centers with the latest hardware. However... we also rely on extensive CI testing with hosts using lower-power versions of cards (because those hosts each need to have 2 AMD cards, 2 NVIDIA cards, and 2 Intel cards) that we really don't want to keep updating and changing all the time, not to mention that we don't want to have to put 6x 300W cards in any of them.

AMD's decision to only formally support the very latest cards drawing hundreds of watts effectively means you are making it impossible for us to test things on ROCm version 4 or later, including OpenCL 2.1. It's of course perfectly fine to say that's our problem (which it is :-), but this simply means the latest ROCm releases are no longer a tier-0 tested platform for GROMACS, and if this doesn't change before the end of the year, I see no alternative but to recommend users go with GPUs from other vendors instead :-/ |
|
Hi everyone, I think I have a definitive answer. Although AMD isn't communicating with us properly, they seem to have told the Fedora team that they want to support gfx803 but are too busy with other things, so they dropped it from official to experimental, and no patch is needed. I have tested this on my RX 580 and it works in OpenCL & HIP. I also spoke with the Blender team; since they chose not to put gfx803 in their HIP targets, they said there is a possibility for them to make it work. EDIT: testing that env var was done in ROCm 5.2.1 from repo.radeon.com
Is this exclusive to Fedora only? Or can we use this on other distros? |
Can be used on all distros, tested on
The page you linked on Fedora says (I added bold-italic to highlight):
Can anyone confirm this is correct for the latest ROCm releases? EDIT: |
|
I tested on ROCm 5.2.3 + RX 580, and it works. I think it is an undocumented environment variable to control whether we can use OpenCL on gfx803. |
Yes, I agree with that. What I think is happening regarding that phrase on the Fedora page, is that Fedora is doing their ROCm builds from 5.2.1-2 onward with that variable defaulting to true (or maybe their build package sets the environment variable when the installation is done?). So, the rest of us non-Fedora users have to continue setting that environment variable. |
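For the non-Fedora case, the usual form is simply exporting ROC_ENABLE_PRE_VEGA=1 in the shell before launching the application. The sketch below does the equivalent from inside the process; it assumes the ROCm OpenCL runtime only reads the variable when it is first initialized, so setting it before the first OpenCL call should have the same effect (exporting it in the shell remains the safer route). The file name is hypothetical:

```cpp
// pre-vega-check.cpp -- hypothetical file name; build with: g++ pre-vega-check.cpp -lOpenCL -o pre-vega-check
// Sketch: set ROC_ENABLE_PRE_VEGA before the OpenCL runtime initializes, then count GPU devices.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <cstdio>
#include <cstdlib>

int main() {
    // Must run before the first OpenCL call; the shell equivalent is:
    //   ROC_ENABLE_PRE_VEGA=1 ./pre-vega-check
    setenv("ROC_ENABLE_PRE_VEGA", "1", /*overwrite=*/1);

    cl_platform_id platform;  // only the first platform is checked (assumed to be AMD's)
    cl_uint num_devices = 0;
    if (clGetPlatformIDs(1, &platform, nullptr) == CL_SUCCESS)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 0, nullptr, &num_devices);

    std::printf("GPU devices visible: %u\n", num_devices);
    return num_devices > 0 ? 0 : 1;
}
```

On a gfx803-only machine, zero visible devices with the variable unset and one with it set would match the behaviour described above.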
|
Is anybody able to test if HIP could be used with this method as well? |
HIP doesn't directly depend on ROCclr, and thus works on gfx803 out of the box, and I can run HIP code with it. But forcing Blender to render on it caused a white render, so there is an issue on the Blender side that needs fixing. |
|
That's exactly the use case I wanted to test myself, I couldn't even get Blender to detect my RX 580 yet. |
|
@CosmicFusion |
GloriousEggroll and I tried adding gfx803 to Blender 3.4; it either renders a white image or a red mess. This is a Blender issue, not a HIP one, as I can compile HIP code successfully on my RX 580.
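For anyone who wants to confirm that the HIP side is fine on gfx803 independently of Blender, here is a minimal sketch (assuming a working hipcc install; the --offload-arch=gfx803 target matches the Polaris cards discussed here, and the file name is hypothetical):

```cpp
// hip-smoke.cpp -- hypothetical file name; build with: hipcc --offload-arch=gfx803 hip-smoke.cpp -o hip-smoke
// Sketch: run a trivial kernel on device 0 and verify the result on the host.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    hipDeviceProp_t prop;
    if (hipGetDeviceProperties(&prop, 0) != hipSuccess) {
        std::printf("No HIP device found.\n");
        return 1;
    }
    std::printf("Device 0: %s (arch %s)\n", prop.name, prop.gcnArchName);

    const int n = 1024;
    std::vector<float> host(n, 2.0f);
    float* dev = nullptr;
    hipMalloc(reinterpret_cast<void**>(&dev), n * sizeof(float));  // error checks trimmed for brevity
    hipMemcpy(dev, host.data(), n * sizeof(float), hipMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 1.5f, n);

    hipMemcpy(host.data(), dev, n * sizeof(float), hipMemcpyDeviceToHost);
    hipFree(dev);

    std::printf("%s\n", host[0] == 3.0f ? "PASS" : "FAIL");
    return 0;
}
```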
You think I can use OpenCL 1.2 on FreeBSD 13.1 for my RX-580 GPU? I need to run OpenCV or "Dlib" machine vision frameworks using my GPU for accelerated computations for face recognition. |
|
Wow, I got it to work using https://github.com/rocm-arch/rocm-arch/blob/master/README.md and |
As a consumer, I don't see this as a valid response from AMD. I have never, really never, seen a driver for a series of cards drop support on purpose for a feature that is marketed with the cards, so really, fix this ASAP. The community is handling your issues for you; on Arch, Polaris works just fine with a patched package from the AUR (https://archlinux.org/packages/community-testing/x86_64/rocm-opencl-runtime/) |
You don't need to patch ROCm anymore, ROC_ENABLE_PRE_VEGA=1 should be enough. |
|
@CosmicFusion I attempted the Blender port as well. I successfully ran the 3.3 ROCm build, but when I used the same patches for 3.4 I got the same thing you said: a white and red mess. |
wait do you mean you were able to successfully render on blender using 3.3? |
@redthing1 what is the hardware that you used? |
Gigabyte RX 580 8GB with only ROC_ENABLE_PRE_VEGA=1. For the record, this still works in ROCm 5.4.1 for OpenCL, but I haven't tested HIP since 5.2.3 and Blender 3.4 alpha or beta (I don't remember), and that was without the HSA override env. I currently have exams so I can't test. EDIT: Oh, you are not talking to me lol |
Well, funny enough, he used the exact same hardware as me: a Gigabyte RX 580 8GB with only ROC_ENABLE_PRE_VEGA=1 |
@redthing1 what Linux distribution are you using ? And how do you set ROC_ENABLE_PRE_VEGA=1 ? Do you have to set this variable before you install ? |
Yes |
Great! I tried running install script from |
|
Alright, I admit defeat. I tried to install 5.4.3 on Ubuntu 22 from the install script, with no luck. Fedora 37 works right out of the box, so I will stick with Fedora. Unfortunately, the performance of my CL kernels is 50% of what I used to get with ROCm 3.x |
You need to run this |
Thanks, I will stick with Fedora, as I prefer it to clunky old Ubuntu. I am assuming that perf won't change between the two distributions. |
cf: #1608
And please don't close this issue until we have a clear answer: has Polaris support been intentionally dropped from ROCm after only 6 years, or is this an error?