Enable gfx1010 by default in rocSPARSE builds #348
Comments
@ulyssesrr Thanks for the suggestion. Let me talk to the team and I will get back to you on whether we can add this additional target.
Adding the gfx1010 target would also enable users to run rocSPARSE on gfx1011, gfx1012, and gfx1013 using the HSA_OVERRIDE_GFX_VERSION=10.1.0 workaround.
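For readers unfamiliar with that workaround: the ROCm runtime can be told to treat a GPU as a different architecture via an environment variable. A minimal sketch (assumes a working ROCm install and a Navi 1x GPU; not something the rocSPARSE project documents officially):

```shell
# Make the ROCm runtime load gfx1010 code objects on
# gfx1011/gfx1012/gfx1013 hardware (unsupported workaround).
export HSA_OVERRIDE_GFX_VERSION=10.1.0

# Sanity check: the agent should still enumerate, and gfx1010
# binaries will now be selected for it.
rocminfo | grep -i gfx
```

This only helps if gfx1010 code objects exist in the installed libraries, which is exactly why building them by default matters.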
Hello @ulyssesrr. As gfx1010 is not fully supported, we will not add it as a build target to the official distribution. However, you may build for gfx1010 from source, and if you encounter any issues, please reach out for help on this repository.
@doctorcolinsmith @jsandham Hello, I have a question: what exactly is not fully supported at the moment?
If rocSPARSE is built for gfx1010, it will work on gfx1010. The ROCm binary packages are not built for gfx1010, but you can build rocSPARSE from source or install a third-party package that is built for your architecture (such as the librocsparse0 package on Ubuntu 23.10 or Debian Trixie). The thread you linked is just saying that rocBLAS now works with Tensile lazy loading and separate architectures on gfx1010, provided rocBLAS has been built for gfx1010.
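For anyone following along, a from-source build for gfx1010 looks roughly like this. This is a sketch, not official documentation: it assumes HIP and the rocPRIM dependency are already installed under /opt/rocm, and the exact options can differ between ROCm releases.

```shell
# Build rocSPARSE from source targeting gfx1010 (illustrative).
git clone https://github.com/ROCmSoftwarePlatform/rocSPARSE.git
cd rocSPARSE
mkdir build && cd build

# AMDGPU_TARGETS is the cache variable that controls which GPU
# ISAs get code objects; override the default list here.
cmake -DCMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc \
      -DAMDGPU_TARGETS=gfx1010 ..

make -j"$(nproc)"
sudo make install
```

Building only for gfx1010 (rather than the full default list) also keeps compile time and memory usage down considerably.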
Although, I suppose it's relevant to note that rocBLAS has historically been built for gfx1010 in AMD official releases. The gfx1010 target was enabled in the same commit that enabled gfx1030. The rocSOLVER library matched rocBLAS soon after, but to my knowledge, no other libraries have been built for gfx1010 in AMD's binary packages.
@cgmb Hello, I mean enabling gfx1010 by default so that prebuilt ROCm and PyTorch can be used without recompiling things that should normally work out of the box. For example, NVIDIA allows older cards to be used; why doesn't AMD, even in cases where it is technically possible (even with "unofficially supported" status)? I would like PyTorch/TensorFlow to work out of the box for gfx1010 without recompiling (recompiling usually requires 26-30 GB of RAM and takes a few hours just to get PyTorch working). By "out of the box" I mean the rocm/pytorch or rocm/tensorflow Docker images.
@serhii-nakon To confirm, gfx1010 is not officially supported in the shipped ROCm packages. It will likely be removed from rocBLAS and rocSOLVER in an upcoming release. |
@doctorcolinsmith But why? That is even worse than before... Currently I can rebuild just some parts of ROCm, but you are going to remove gfx1010 entirely. Can you explain why gfx1010 cannot be provided in the same way as gfx1030?
@doctorcolinsmith I already have an answer to those questions: ROCm/ROCm#1735 (comment)
https://github.com/ROCmSoftwarePlatform/rocSPARSE/blob/a1c59eaacdc8c4e31923321bfa3242fb265f6949/CMakeLists.txt#L155
https://github.com/ROCmSoftwarePlatform/rocSPARSE/blob/a1c59eaacdc8c4e31923321bfa3242fb265f6949/CMakeLists.txt#L160
What is the expected behavior
I am aware that gfx1010 is not officially supported, but I believe that some of the architectures quoted above (e.g. gfx803) are not supported either, and gfx1010 is already enabled by default in rocBLAS. I am building rocSPARSE for gfx1010 successfully on both Arch Linux and Ubuntu, and it works, so ideally gfx1010 could be added to the default targets in the same unsupported fashion. This would hopefully cascade to downstream distros and lower ROCm friction.
What actually happens
gfx1010 is not built by default 🙁
How to reproduce
Standard build.
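A "standard build" here means configuring without overriding the target list, after which gfx1010 is absent from the defaults. A sketch of checking this (assumes ROCm's hipcc is installed; the cache-variable name comes from the CMakeLists lines linked above):

```shell
# Configure rocSPARSE with its default settings, then inspect
# which GPU targets were selected (illustrative).
cd rocSPARSE && mkdir build && cd build
cmake -DCMAKE_CXX_COMPILER=/opt/rocm/bin/hipcc ..

# The cached default target list does not include gfx1010.
grep AMDGPU_TARGETS CMakeCache.txt
```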
Environment