Conversation
72 tasks
Force-pushed from 5020c57 to 20e9b27
illsilin (Collaborator) approved these changes on Nov 26, 2025:

> Looks good. Thanks!
AviralGoelAMD pushed a commit that referenced this pull request on Nov 28, 2025.
mgehre-amd added a commit to ROCm/TheRock that referenced this pull request on Dec 1, 2025:

**Add gfx1153 target**

## Technical Details
I think this can be merged, but it will need ROCm/rocm-libraries#2655, ROCm/rocm-libraries#2653, ROCm/rocm-libraries#2850, and ROCm/composable_kernel#3306 to be useful.

## Test Plan
Run the hipBLASLt, hipBLAS, and rocBLAS unit tests. Run llama.cpp.

## Test Result
Unit tests passed. Successfully ran llama.cpp locally with the PRs above.

## Submission Checklist
- [x] Look over the contributing guidelines at https://github.com/ROCm/ROCm/blob/develop/CONTRIBUTING.md#pull-requests.
rponnuru5 pushed a commit to ROCm/TheRock that referenced this pull request on Dec 9, 2025, with the same "Add gfx1153 target" commit message as above.
xinyazhang pushed a commit to ROCm/aotriton that referenced this pull request on Dec 12, 2025:

# Overview
For the [bring-up effort](ROCm/TheRock#2310) of the gfx115{2,3} archs, one coverage requirement is to have them running some vLLM models. For a working vLLM installation, a few requirements need to be met:
- A ROCm build with gfx115{2,3} support.
- A PyTorch installation using that ROCm build.
- PyTorch uses the [composable_kernel](https://github.com/ROCm/composable_kernel) library, so when building PyTorch I had to make sure it used a branch containing ROCm/composable_kernel#3306 so that composable_kernel also supports gfx115{2,3}.
- PyTorch also makes use of **aotriton**. To make aotriton work for my target archs I had to make the changes presented in this PR; otherwise the build would fail.

After these changes were made, I could build PyTorch and vLLM and serve some models correctly.
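The target selection in the build chain above can be sketched with the `PYTORCH_ROCM_ARCH` environment variable, which PyTorch's ROCm build reads to choose GPU targets. This is only a minimal illustration of the arch-list plumbing, not the full build procedure:

```python
import os

# PYTORCH_ROCM_ARCH tells PyTorch's ROCm build which GPU targets to compile
# kernels for; here we request the two bring-up archs from the PR above.
os.environ["PYTORCH_ROCM_ARCH"] = "gfx1152;gfx1153"

# The build splits the ';'-separated list into individual targets.
targets = os.environ["PYTORCH_ROCM_ARCH"].split(";")
print(targets)  # → ['gfx1152', 'gfx1153']
```

The same list must also be threaded through the ROCm libraries themselves (e.g. composable_kernel), which is why the linked PRs are prerequisites.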
Proposed changes
Adding support for gfx1153. This device is very similar to gfx1152 (but with fewer CUs), so I added it in the same places where gfx1152 was mentioned.
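The change pattern can be illustrated with a small sketch; the names below (`SUPPORTED_ARCHS`, `is_supported`) are hypothetical, not actual identifiers from the codebase. Wherever gfx1152 appears in a supported-target list, gfx1153 is added beside it, since the two differ only in CU count:

```python
# Hypothetical sketch -- SUPPORTED_ARCHS and is_supported() are illustrative
# names, not real identifiers from composable_kernel.
SUPPORTED_ARCHS = {"gfx1152", "gfx1153"}  # gfx1153 added next to gfx1152

def is_supported(arch: str) -> bool:
    # Strip any target-feature suffix such as ":xnack-" before the lookup.
    return arch.split(":")[0] in SUPPORTED_ARCHS

print(is_supported("gfx1153"))  # → True
```

Because gfx1153 shares gfx1152's ISA, no new kernel code paths are needed, only the extra entry in each arch list.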
Checklist

Please put an `x` into the boxes that apply. You can also fill these out after creating the PR. If you're not sure, please don't hesitate to ask.
- [ ] `clang-format` on all changed files