Replies: 20 comments 76 replies
-
Please support SteamOS on the Steam Deck and its GPU: AMD Custom GPU 0405
-
Gentoo compatibility. If it can be built on Gentoo, then it will work well for anyone who thinks they know what ROCm is but actually doesn't. Rust, for example, does not build easily on Gentoo, but it does build. It would also be a cool feature for the release cycle to keep cutting-edge releases exclusive to the intended platform (Instinct, etc.), with a second LTS for a "tock" cycle of consumer GPUs. People are practically begging for this use case, and it could be used to clear held stock.
-
Umm, here are two:
OK, rant over; keep innovating, AMD 👍
-
Hey AMD, there are so many people eager to have ROCm work on their cards, only to finally find that the only reasonable solution is to turn to team green. Don't fail us again.
-
Will there be a Windows 10 release any time soon? I understand it is a whole new OS to build for, but is it even under consideration? I can't find any note or mention of Windows support for ROCm.
-
Highly interested in out-of-the-box ROCm support on Arch Linux for prosumers. I want to play and be productive at a reasonable price; basically, I want ROCm to work as seamlessly as Mesa does these days for AMD on Linux. (I'm fine with an additional package install, but it just needs to work.)
-
Buying a 7900 XT right now is ill-advised. I've had the card for over two weeks and have reinstalled Ubuntu seven times, trying all kinds of methods, and I still can't run stable-diffusion. I can't even play Linux games; the GPU isn't recognized by them. Who knows how long it will take AMD to ship an update. At least for now, it's worse than last generation's 6600 XT.
-
In #994 @abuccts requests a container runtime for docker/kubernetes to allow specifying accessible GPUs by environment variables. To expand further with some personal points:
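To make the request above concrete: NVIDIA's container toolkit selects GPUs with the `NVIDIA_VISIBLE_DEVICES` environment variable. A ROCm analogue of the kind requested (the variable name `AMD_VISIBLE_DEVICES` below is hypothetical, not an existing interface; `amd.com/gpu` is the resource name used by AMD's Kubernetes device plugin) might look like this in a pod spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rocm-job
spec:
  containers:
    - name: worker
      image: rocm/rocm-terminal   # public ROCm base image
      env:
        # Hypothetical: select GPUs by index, mirroring NVIDIA_VISIBLE_DEVICES.
        - name: AMD_VISIBLE_DEVICES
          value: "0,1"
      resources:
        limits:
          amd.com/gpu: 2   # resource exposed by AMD's k8s device plugin
```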
-
If Windows support is not in the plan, how about WSL2?
-
Please add support for OpenMP offload and HIP in the same translation unit. This is needed for templated C++ codes that could use both OpenMP offload and HIP, for instance depending on downstream users' preferences, to enable interoperability of the two strategies. The use of templates makes it impossible or impractical to factor things into separately compiled translation units, and I suspect most of the technological underpinnings are already available, given that OpenMP offload can compile and launch HIP kernels and device functions. NVIDIA's HPC compiler currently supports this feature; it would be great if AMD's did as well. Discussed in #2137
-
I'm using ROCm/HIP mostly for rendering in Blender, but I'm also experimenting with Stable Diffusion (inference). All under Linux.
Ad 1: I'm not going to buy any AMD GPU unless it has official ROCm support for the parts/libs required by the tools I use. Period. I'm still on a Radeon VII, btw.
-
@dbenedb @saadrahim Hopefully they speed up official support across all of RDNA, and RDNA4 will be supported by ROCm from day one, especially if they also bring systolic arrays (matrix cores in CDNA, tensor cores at NVIDIA) to RDNA4. I also think AMD should push for Zen 4/5 + RDNA4 consoles with systolic arrays and Infinity Cache for 2024/25 (a PS5 Pro or similar). It would benefit AI algorithms on consoles (bot behavior, FSR3, etc.), and since AMD is not strongly associated with AI, it would be a masterstroke at the marketing level, where millions of players are. Native Windows support is essential eventually, but right now I think it's optional; most people who work in AI use Linux. Another important feature would be DirectStorage-style I/O for deep learning, or even real-time compression and decompression of data for DL in ROCm, similar to what consoles do. As for Linux drivers, I hope the ability to overclock and undervolt both cores and memory, to get the most out of each chip, is maintained.
-
This is more of a hardware/driver request. Could ROCm offer an FPGA integration similar to Optane? Modern consumer motherboards have a glut of M.2 slots for custom GPGPU workloads. Would it be possible to have an M.2 FPGA for Instinct that converts CrossFire-capable GPUs into ROCm-compatible systems?
-
Using a 7800 XT with driver 23.11.1 on Windows 10, and the latest HIP/ROCm from AMD, it is not working at all. Nothing can detect the library; everything falls back to OpenCL or CPU computing.
-
Are there any plans for driver support for QoS or resource isolation between containers?
-
On NVIDIA cards that are Volta or newer, FP16 arithmetic is used instead of FP32. This is both faster and reduces the amount of shared memory required.
-
All 3 links are broken; please fix them, @saadrahim
-
Dear friends, I'm going to use CK's API to write a gemm.cpp program for matrix multiplication. How should I implement this task?
-
Please add ROCm support for the RX 7700 XT on Ubuntu 🙏🙏🙏
-
No promises, but let us know what you want here. New OS support, features, and anything else is helpful.