
[BACKEND][AMDGPU] Use full-vectorized load instructions for load vectorization #3609

Merged 1 commit into triton-lang:main on Apr 9, 2024

Conversation

@htyu (Collaborator) commented Apr 8, 2024

The current implementation of load vectorization emits segmented, short vectorized loads instead of a single full 128-bit load. Emitting multiple copies of a shorter load creates a dependency on the LLVM backend (especially the load and store vectorizer) to re-combine them into a full vector. This is fragile: in some cases the vector combine pass and the jump threading pass broke the pattern up, resulting in suboptimal vectorization.

This is a backport of ROCm#445

@htyu htyu requested a review from ptillet as a code owner April 8, 2024 22:53
@htyu htyu requested a review from zhanglx13 April 8, 2024 22:53
…lang#445)

* Stabilize load vectorization

* fix test failures

* Share one mask check when decomposing a load

* Revert "fix test failures"

This reverts commit 75a461a.

* Emit vectorized loads

* Fix test failures due to using vectorized load
@zhanglx13 zhanglx13 requested a review from zahimoud April 9, 2024 02:00
@zahimoud zahimoud merged commit 29b2fbe into triton-lang:main Apr 9, 2024
5 checks passed
3 participants