
Add support for gfx1153 #3306

Merged
mgehre-amd merged 1 commit into develop from matthias.gfx1153 on Nov 27, 2025

Conversation

Contributor

@mgehre-amd mgehre-amd commented Nov 26, 2025

Proposed changes

Adding support for gfx1153. This device is very similar to gfx1152 (but with fewer CUs), so I added it in the same places where gfx1152 was mentioned.
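The "same places as gfx1152" approach amounts to extending arch-string checks so gfx1153 is treated like its sibling. A minimal sketch of that pattern, with an illustrative helper name; composable_kernel's actual checks are C++ arch-string comparisons, not this code:

```shell
# Hedged sketch: treat gfx1153 exactly like gfx1152 in a support check.
# The helper name and arch list are illustrative, not composable_kernel's code.
is_gfx115x() {
  case "$1" in
    gfx1152|gfx1153) return 0 ;;  # gfx1153 differs mainly in CU count
    *) return 1 ;;
  esac
}

is_gfx115x gfx1153 && echo "supported"  # prints "supported"
```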

Checklist

Please put an x into the boxes that apply. You can also fill these out after creating the PR. If you're not sure, please don't hesitate to ask.

  • I have added tests relevant to the introduced functionality, and the unit tests are passing locally
  • I have added the test to the REGRESSION_TESTS list defined at the top of tests/CMakeLists.txt, IF the test takes more than 30 seconds to run
  • I have added inline documentation which helps the maintainers understand the motivation
  • I have removed stale documentation which is no longer relevant after this pull request
  • (If this change is user-facing) I have added release notes which provide end users with a brief summary of the improvement from this pull request
  • I have run clang-format on all changed files
  • Any dependent changes have been merged

@illsilin
Collaborator

Looks good. Thanks!

@mgehre-amd mgehre-amd merged commit 678298d into develop Nov 27, 2025
21 checks passed
@mgehre-amd mgehre-amd deleted the matthias.gfx1153 branch November 27, 2025 07:48
AviralGoelAMD pushed a commit that referenced this pull request Nov 28, 2025
mgehre-amd added a commit to ROCm/TheRock that referenced this pull request Dec 1, 2025
Add gfx1153 target

## Technical Details

I think this can be merged, but to be useful it will need:
- ROCm/rocm-libraries#2655
- ROCm/rocm-libraries#2653
- ROCm/rocm-libraries#2850
- ROCm/composable_kernel#3306

## Test Plan

Run the hipBLASLt, hipBLAS, and rocBLAS unit tests.
Run llama.cpp.

## Test Result

Unit tests passed.
Successfully ran llama.cpp locally with the PRs above.

## Submission Checklist

- [x] Look over the contributing guidelines at
https://github.com/ROCm/ROCm/blob/develop/CONTRIBUTING.md#pull-requests.
rponnuru5 pushed a commit to ROCm/TheRock that referenced this pull request Dec 9, 2025
xinyazhang pushed a commit to ROCm/aotriton that referenced this pull request Dec 12, 2025
# Overview

For the [bring-up effort](ROCm/TheRock#2310) of the gfx115{2,3} archs, one coverage requirement is to have them running some vLLM models. For a vLLM installation to work, a few requirements need to be met:

- A ROCm build with gfx115{2,3} support.
- A PyTorch installation using that ROCm build.
- PyTorch uses the [composable_kernel](https://github.com/ROCm/composable_kernel) library, so when building PyTorch I had to make sure it used a branch containing ROCm/composable_kernel#3306 so that composable_kernel also supports gfx115{2,3}.
- PyTorch also makes use of **aotriton**. To make aotriton work for my target archs I had to make the changes in this PR; otherwise the build would fail.

After these changes I could build PyTorch and vLLM and serve some models correctly.
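One concrete step in such a bring-up is telling PyTorch's ROCm build which GPU targets to compile for, which it reads from the PYTORCH_ROCM_ARCH environment variable. A minimal sketch, assuming the gfx115{2,3} arch list (the exact value is an assumption, not quoted from this PR):

```shell
# Hedged sketch: select the gfx115x targets when building PyTorch against a
# ROCm build that supports them. PYTORCH_ROCM_ARCH is PyTorch's standard
# variable for choosing AMD GPU targets; this particular arch list is an
# assumption for illustration.
export PYTORCH_ROCM_ARCH="gfx1152;gfx1153"
echo "Building PyTorch for: ${PYTORCH_ROCM_ARCH}"
```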
