Q: What is status of ROCm on RX 5500 XT? #1306
Comments
Duplicate of #887
Hi @Doev
@ROCmSupport If you want to try navi10,
Agreed, @xuhuisheng
I don't know about the RX 5500 XT, but on my RX 5500M (Navi 14) I hit this issue with version 3.9 when I try to use TF:
And which things definitely won't work?
@da-phil OpenCL, which doesn't depend on rocm-libs, may work.
Yup, OpenCL works well, that's not the issue. The issue is the deep-learning frameworks, which currently don't work.
@da-phil Following Rigtorp's steps, I am sure that ROCm-3.10.x can be compiled successfully for gfx1010; only rocSPARSE has some dpp-bcast issues, but I think we can follow rocPRIM's way of resolving them. I can share some build scripts for gfx1010 after ROCm-3.10 is released. Anyone who is interested could have a try.
Yeah, only compiling ROCm for gfx1010 is not enough; it would also be great if it actually worked 😅
This would be awesome, then I'd give it a try too 😄
Thanks for the many replies. I'll give it a try: I have ordered an RX 5500 XT, since €200 is not that much, and if everything fails I can sell the card. I don't understand why the Navi architecture is so important, since I thought the RX 5500 XT was from the Vega series. Well, I have never owned an AMD GPU before.
@Doev try using PlaidML; it might not give you a huge boost, but maybe some improvement (if it works on the RX 5500 XT).
PlaidML only works with Keras; you should use nGraph (with the PlaidML backend) besides it, but both are outdated, though they said they're working on a new release.
When can we expect it? |
@da-phil @aliPMPAINT |
@xuhuisheng maybe duplicate it in #887
@xuhuisheng Thanks. Will test it on Friday. Just one question: with the RX 5500M (gfx1012), I should replace
You must use gfx1012, or it will throw hipErrorNoBinaryForGpu. You can get the GPU arch name from rocminfo.
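rocminfo prints a `Name:` line for each HSA agent, and the GPU agents show the gfx arch string there. A minimal sketch of pulling those arch names out of rocminfo's text output (the sample excerpt below is illustrative, not captured from a real 5500M):

```python
import re

def gfx_archs(rocminfo_output):
    """Return the gfx arch names (e.g. 'gfx1012') found in rocminfo output."""
    return re.findall(r"^\s*Name:\s*(gfx\w+)", rocminfo_output, flags=re.MULTILINE)

# Illustrative excerpt of rocminfo output for a Navi 14 card (hypothetical):
sample = """
  Name:                    gfx1012
  Marketing Name:          AMD Radeon RX 5500M
"""
print(gfx_archs(sample))  # ['gfx1012']
```

In a live session you would feed it the real output, e.g. `subprocess.run(["rocminfo"], capture_output=True, text=True).stdout`; an empty result suggests the driver stack does not see the GPU at all.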
Recompiling ROCm is too complicated, and I cannot be sure it works, so I don't want to show it to anyone who isn't very interested.
@xuhuisheng Thanks for the response, will report back by next Saturday. Much appreciated.
Hi @aliPMPAINT |
@xuhuisheng Ok, so I tried your guide. Unfortunately it didn't work out; I either encountered
So, I tested 5.4.0-56. Because 5.4.0-56 isn't compatible with my hardware, I had to Ctrl+Alt+F2 in order to log in (I can't get through booting).
And
But they do get recognized on 5.6, I'll attach files.
@aliPMPAINT And if rocminfo cannot recognize gfx1012, it means the kernel driver, thunk interface, or HSA runtime cannot support the target device. I don't know how to debug at that level for now, so I suggest using a version on which rocminfo runs normally.
@xuhuisheng |
Uploaded some code to check rocm-libs.
@xuhuisheng |
@vdrhtc
If you have time, please try compiling pytorch for gfx1012. 😄 BTW, hipSPARSE is an abstraction layer over ROCm and CUDA, so it may not need to be compiled for gfx1012. I will try to find out where /opt/rocm-3.10. comes from. And I don't suggest using the latest develop branch, as it may contain unstable functions.
@vdrhtc Yeah, I had the same issue. |
@xuhuisheng I am also able to train a small feed-forward network with PyTorch on the cuda:0 device, so I guess everything is all right...
@vdrhtc |
@vdrhtc Yeah?
Could you provide the link? I also wanna test it |
@aliPMPAINT Yes, your edit is correct! |
Tried it on my 5700 XT (Navi 10). IT WORKS!!! 👍🏻 :) Edit: Upon further inspection, it does not really work. If you look at the loss, it does not improve. So whatever it computes does not seem to be right / no weights are changed :(
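The failure mode described here, where kernels run without error but the loss stays flat and weights never change, can be caught early by training a trivial model for a few steps and asserting the loss actually drops. A framework-free sketch of that sanity check (plain Python, no torch, so it only illustrates the check itself; on a working ROCm stack the equivalent torch loop should show the same drop):

```python
import random

# Tiny linear model y = w*x + b trained by full-batch gradient descent
# on data generated from y = 3x + 1. If the loss does not fall by
# orders of magnitude here, the compute path is suspect.
random.seed(0)
data = [(x, 3.0 * x + 1.0) for x in (random.uniform(-1, 1) for _ in range(64))]

w, b, lr = 0.0, 0.0, 0.1

def mse(w, b):
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

initial_loss = mse(w, b)
for _ in range(200):
    # Gradients of the mean squared error w.r.t. w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

final_loss = mse(w, b)
assert final_loss < initial_loss * 1e-3, "loss did not improve - suspect broken kernels"
print(f"loss {initial_loss:.4f} -> {final_loss:.8f}, w~{w:.2f}, b~{b:.2f}")
```

The same assertion dropped into a GPU training script distinguishes "runs but computes garbage" from "actually learns", which is exactly the distinction missed on the first try above.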
Oh no... Well, it seems like we should wait till ROCm adds official support for Navi series, hopefully. |
@Spacefish |
How do you install torchvision? |
@qyb
I remembered that vision uses HIP to compile some cpp sources, but it did not report hipErrorNoBinaryForGpu, so I am afraid that isn't the point. UPDATE: tried pytorch-1.7.1 (with gfx803) and torchvision-0.8.2 (from PyPI); the loss on mnist computes properly. So torchvision should not be the point.
If I build torchvision from source, the example main.py throws hipErrorNoBinaryForGpu and crashes, even with the --no-cuda argument. Now I have changed back to the torchvision from PyPI.
@qyb It's a net with one fully connected layer, from https://d2l.ai/.
@qyb Guess it should be the MIOpen issue. 😢 |
I have run the tests in the MIOpen repo; some of them fail, please see the attached log.
Any updates on 5500 XT? |
@interpharaohmetric |
I'm too lazy to read all of this, but are you telling me that some old GPUs are (partially) supported, while the new ones are not? Except for the new & pricey ones? |
Still waiting for RX 5600 (Navi 10, RDNA 1) support ;( |
Any news on compatibility for this GPU?
Here is a Docker image for the RX/W5500(M): https://hub.docker.com/r/serhiin/rocm_gfx1012_pytorch
Hello,
I would like to evaluate whether ROCm is suitable for deep learning. What about the RX 5500 XT? Is it possible to use that cheap GPU for this?