[GPU] Follow the official naming convention for WMMA attributes. #18147

Merged 2 commits from update-wmma-attr into iree-org:main on Aug 7, 2024

Conversation

@hanhanW (Contributor) commented Aug 7, 2024

82012e6 missed the WMMA_F32_16x16x16_F16 case. The WMMA_F16_16x16x16_F16 case is fine because its input and output types are both F16.

This revision addresses the failure on the main branch: https://github.com/iree-org/iree/actions/runs/10289449633/job/28478608054

The change was generated with the commands below.

sed -i "s/WMMA_F16_16x16x16_F32/WMMA_F32_16x16x16_F16/g" **/*.h
sed -i "s/WMMA_F16_16x16x16_F32/WMMA_F32_16x16x16_F16/g" **/*.td
sed -i "s/WMMA_F16_16x16x16_F32/WMMA_F32_16x16x16_F16/g" **/*.cpp
sed -i "s/WMMA_F16_16x16x16_F32/WMMA_F32_16x16x16_F16/g" **/*.mlir
sed -i "s/WMMA_F16_16x16x16_F32/WMMA_F32_16x16x16_F16/g" **/*.py

ci-extra: build_packages,test_amd_mi250,test_amd_mi300,test_amd_w7900,test_nvidia_t4

iree-org@82012e6 missed the `WMMA_F32_16x16x16_F16` case. The `WMMA_F16_16x16x16_F16` is fine because the input type and output type are all F16.

Signed-off-by: hanhanW <hanhan0912@gmail.com>
@ScottTodd (Member) left a comment

Thanks for staying on top of this. LGTM, but let's wait for the full CI runs to complete instead of using auto-merge?

Comment on lines +561 to 562
iree.amdgpu.mma = #iree_gpu.mma_layout<WMMA_F32_16x16x16_F16>
} %A, %B, %C : vector<16x16xf16>, vector<16x16xf16> into vector<16x16xf32>

@ScottTodd (Member) commented:

Sneaky flipped order here :O

@hanhanW (Contributor, Author) commented Aug 7, 2024

> Thanks for staying on top of this. LGTM, but let's wait for the full CI runs to complete instead of using auto-merge?

The W7900 job is skipped. Let me add ci-extra and re-trigger the flow.
https://github.com/iree-org/iree/actions/runs/10292406112/job/28487138348?pr=18147

@ScottTodd (Member) commented:

> > Thanks for staying on top of this. LGTM, but let's wait for the full CI runs to complete instead of using auto-merge?
>
> The W7900 job is skipped. Let me add ci-extra and re-trigger the flow. https://github.com/iree-org/iree/actions/runs/10292406112/job/28487138348?pr=18147

Ooh, right.

# Due to the instability issues at the current runner,
# only run this test in postsubmit.
# ("test_amd_w7900", AMDGPU_PATHS),

@hanhanW merged commit e9e24f8 into iree-org:main on Aug 7, 2024.
44 checks passed.
@hanhanW deleted the update-wmma-attr branch on August 7, 2024 at 23:10.