HIP: Add RDNA3 WMMA support to MMF #17495
base: master
Conversation
Key Changes Made:
1. ggml/src/ggml-cuda/common.cuh:
- Extended the AMD_WMMA_AVAILABLE macro to include both RDNA3 and RDNA4 (see the sketch after this list)
- Updated amd_wmma_available() to return true for both architectures
2. ggml/src/ggml-cuda/mma.cuh:
- Tile structures: Added RDNA3-specific tile sizes:
- RDNA4: 4 half2 = 8 FP16 elements (compact layout)
- RDNA3: 8 half2 = 16 FP16 elements (duplicate layout required by hardware)
- MMA operations: Added RDNA3 intrinsics:
- FP16: __builtin_amdgcn_wmma_f32_16x16x16_f16_w32 (no gfx12 suffix)
- BF16: __builtin_amdgcn_wmma_f32_16x16x16_bf16_w32
- Uses halfx16_t/bf16x16_t for RDNA3 vs halfx8_t/bf16x8_t for RDNA4
- Load operations: Added conditional handling for 32-byte RDNA3 tiles using two 16-byte copies
3. ggml/src/ggml-cuda/mmf.cu:
- Updated to use amd_wmma_available() for both RDNA3 and RDNA4
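A minimal sketch of the gating described above (not the PR's verbatim code): the availability check in common.cuh and the per-architecture FP16 WMMA call in mma.cuh. The GGML_CUDA_CC_IS_RDNA3/RDNA4 helpers are assumed to exist in common.cuh, and the builtin operand types are inferred from the halfx8_t/halfx16_t names mentioned above.

```cpp
#include <hip/hip_runtime.h>

// common.cuh: compile-time and runtime availability, per item 1 above.
#if defined(GGML_USE_HIP) && (defined(RDNA3) || defined(RDNA4))
#define AMD_WMMA_AVAILABLE
#endif // defined(GGML_USE_HIP) && (defined(RDNA3) || defined(RDNA4))

static bool amd_wmma_available(const int cc) {
    return GGML_CUDA_CC_IS_RDNA3(cc) || GGML_CUDA_CC_IS_RDNA4(cc); // helpers assumed from common.cuh
}

// mma.cuh: per-lane register fragments. RDNA3 needs the duplicated 16-element
// layout, RDNA4 the compact 8-element layout.
typedef _Float16 halfx8_t  __attribute__((ext_vector_type(8)));
typedef _Float16 halfx16_t __attribute__((ext_vector_type(16)));
typedef float    floatx8_t __attribute__((ext_vector_type(8)));

static __device__ floatx8_t mma_f16_sketch(const void * A, const void * B, floatx8_t acc) {
#if defined(RDNA3)
    // RDNA3 intrinsic has no _gfx12 suffix and consumes 16 FP16 values per lane.
    acc = __builtin_amdgcn_wmma_f32_16x16x16_f16_w32(
        *(const halfx16_t *) A, *(const halfx16_t *) B, acc);
#elif defined(RDNA4)
    // RDNA4 intrinsic consumes the compact 8 FP16 values per lane.
    acc = __builtin_amdgcn_wmma_f32_16x16x16_f16_w32_gfx12(
        *(const halfx8_t *) A, *(const halfx8_t *) B, acc);
#endif // defined(RDNA3) / defined(RDNA4)
    return acc;
}
```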
|
gpt-oss would not be using the MMF path (it uses MMQ); you might have some variation in your measurements |
|
@am17an you're right, since we don't have integer WMMA on RDNA3 this should not be using this code path. I might have other commits in my PR that I don't have in my master build, or maybe my changes mess with the FP16 code path. I'll look into using the same master build, and also check with other FP16 models |
…use MMQ with integer WMMA operations (hardware-accelerated)
|
RDNA3 does have |
|
I am currently working on enabling __builtin_amdgcn_wmma_i32_16x16x16_iu8_w32 for RDNA3 in a similar fashion to #17156 |
|
@jiachengjason great, looking forward to it :) Maybe you can also help me with this, since it touches on MMQ: I am trying to find the source of the GPT OSS 20b regression. It seems to me that RDNA3 no longer uses MMQ with DP4A instructions for batches < 512, which is the fast path for RDNA3. I'm trying to debug this right now in my latest commits. |
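For readers following along: DP4A here refers to the packed 4x int8 dot product with int32 accumulation that the MMQ kernels rely on for small batches; ggml wraps the backend-specific instruction (the ggml_cuda_dp4a helper in common.cuh). A scalar reference of the semantics, purely for illustration:

```cpp
#include <hip/hip_runtime.h>
#include <cstdint>

// Scalar reference for DP4A semantics: dot product of four packed int8 values
// accumulated into an int32. The hardware instruction does this in one op.
__device__ int dp4a_ref(const int a, const int b, int c) {
    const int8_t * a8 = (const int8_t *) &a;
    const int8_t * b8 = (const int8_t *) &b;
    for (int i = 0; i < 4; ++i) {
        c += a8[i] * b8[i];
    }
    return c;
}
```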
JohannesGaessler left a comment
- According to the AMD ISA documentation RDNA3 supports integer tensor cores. Please change the comments to say that it's not implemented rather than not supported.
- Please always add a comment for an #endif to indicate which #if/#ifdef it is closing (see the example after this comment).
- The get_i and get_j methods are not going to work correctly if you mirror the data for RDNA3. Please either implement them correctly or replace them with NO_DEVICE_CODE for RDNA3.
- The code in mma.cuh is currently in a bad state in terms of maintainability and is in dire need of a refactor. However, I consider this to be a job for me, the maintainer, rather than contributors. So no action from your side is necessary; for now it's fine to pile on hacky solutions. I just want to give you a heads-up that the code is subject to change once RDNA3, RDNA4, and CDNA are properly supported and I know what the requirements are.
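For reference, the #endif annotation style requested in the second bullet above, using the macros from this PR:

```cpp
#if defined(AMD_WMMA_AVAILABLE)
#if defined(RDNA3)
    // ... RDNA3-specific tile layout ...
#endif // defined(RDNA3)
#endif // defined(AMD_WMMA_AVAILABLE)
```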
|
@JohannesGaessler thanks for the feedback. RDNA3 indeed supports INT8 in WMMA, and I'll investigate that. It doesn't support FP8, and the sparse WMMA is also missing. I'm looking into get_i and get_j. Regarding your new code in #17505: does it make sense to investigate this code here further, or should I wait for that PR to be merged and then attempt to add this to the new MMA kernel? |
|
RDNA3 support should be largely independent of the changes I'm making to |
Details
1. Separated RDNA3 and RDNA4 integer WMMA implementations in mma.cuh (see the sketch below):
- RDNA4: Uses __builtin_amdgcn_wmma_i32_16x16x16_iu8_w32_gfx12 with int32x2_t (original path preserved)
- RDNA3: Uses __builtin_amdgcn_wmma_i32_16x16x16_iu8_w32 with int32x4_t (new path added)
2. Both architectures now share:
- AMD_WMMA_INT_AVAILABLE macro (enables shared optimizations)
- Memory layout settings (mmq_x_max=128, granularity=16, nwarps=8)
- The tile<16, 4, int> optimizations in mmq.cuh
3. RDNA4-exclusive features remain untouched:
- FP8/BF8 WMMA operations
- Specific RDNA4 optimizations behind #if defined(RDNA4) guards
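A rough sketch of the split described in item 1, not the PR's actual code; the int32xN vector typedefs follow the names above, and the builtins' boolean operands (signedness of A/B, clamp) are an assumption based on the LLVM builtin declarations.

```cpp
#include <hip/hip_runtime.h>

typedef int int32x2_t __attribute__((ext_vector_type(2)));
typedef int int32x4_t __attribute__((ext_vector_type(4)));
typedef int int32x8_t __attribute__((ext_vector_type(8)));

static __device__ int32x8_t mma_iu8_sketch(const void * A, const void * B, int32x8_t acc) {
#if defined(RDNA3)
    // RDNA3: 16 int8 values per lane packed into int32x4_t, no _gfx12 suffix.
    acc = __builtin_amdgcn_wmma_i32_16x16x16_iu8_w32(
        true, *(const int32x4_t *) A, true, *(const int32x4_t *) B, acc, false);
#elif defined(RDNA4)
    // RDNA4: 8 int8 values per lane packed into int32x2_t, _gfx12 builtin.
    acc = __builtin_amdgcn_wmma_i32_16x16x16_iu8_w32_gfx12(
        true, *(const int32x2_t *) A, true, *(const int32x2_t *) B, acc, false);
#endif // defined(RDNA3) / defined(RDNA4)
    return acc;
}
```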
|
@JohannesGaessler I've updated the code to make use of int8 WMMA. get_i and get_j are working now, and the #endifs are commented. @jiachengjason as far as I can tell my changes cover the uses of __builtin_amdgcn_wmma_i32_16x16x16_iu8_w32 that I can see. This also fixes the GPT-OSS 20b regression. Please have a look at this draft. I will hold off on resolving the merge conflicts since we want to merge #17505 first. |
|
Hi @unverbraucht, I don't believe the int8 implementation is correct (checked by running HIP_VISIBLE_DEVICES=0 ./build/bin/test-backend-ops test -o MUL_MAT > output.txt). The quantization cases are failing for n > 8. I think the mapping for loading the data into the registers is incorrect for RDNA3 (load_generic). I have attached the output of the backend-ops test. |
I think @unverbraucht uses the original tile<I, J, T> for RDNA3 int8 WMMA and doesn't handle the data loading correctly in load_generic. I wonder if it's possible to move matrices A and B out of tile<I, J, T>, since it covers both row- and column-major matrices for AMD. I would also suggest cleaning up get_i and get_j in tile<I, J, half2> and tile<I, J, nv_bfloat162>, as the iteration based on ne will not be correct. My suggestion would be tile<I, J, T, transposed = true> for matrices A and B for RDNA int8 and tile<I, J, T> for matrix C; then load_generic can use ggml_cuda_memcpy_1 with position get_i(0) + get_j(0) for all RDNA WMMA layouts, since they all have contiguous data, and just remove the ugly int64_t copy. |
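A purely illustrative sketch of that suggestion, with hypothetical simplified types rather than ggml's actual tile/load_generic implementation: a transposed flag on the A/B tiles keeps each lane's data contiguous, so the generic load reduces to a single copy anchored at get_i(0)/get_j(0).

```cpp
#include <hip/hip_runtime.h>
#include <cstring>

template <int I, int J, typename T, bool transposed = false>
struct tile_sketch {
    static constexpr int ne = I * J / 32;   // elements per lane, 32-wide wave
    T x[ne];

    // Illustrative contiguous per-lane layout: each lane owns ne consecutive
    // elements of one row, so only element 0's position is needed for loading.
    __device__ static int get_i(int /*l*/) { return threadIdx.x % I; }
    __device__ static int get_j(int l)     { return (threadIdx.x / I) * ne + l; }
};

// With such a layout, the generic load collapses to one contiguous copy per
// lane (memcpy as a stand-in for ggml_cuda_memcpy_1).
template <int I, int J, typename T, bool transposed>
__device__ void load_generic_sketch(tile_sketch<I, J, T, transposed> & t,
                                    const T * src, const int stride) {
    memcpy(t.x, src + t.get_i(0) * stride + t.get_j(0), sizeof(t.x));
}
```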
Based on the work by @zhang-hui-yulo for RDNA4, I attempted to backport the WMMA MMF support to RDNA3. This also ports the RDNA4 WMMA-MMQ improvements by @jiachengjason from PR #17156 to RDNA3.
The differences from RDNA4 are:
The results for granite 1b 400m look great:
EDIT: the performance regression for GPT OSS 20b has been fixed; now there is a moderate speed-up:
CC @jiachengjason