
Conversation

@jerrymannil jerrymannil commented Aug 12, 2025

  • thread_work_size of 16 gives better performance for many workloads on MI300X

cherry-pick of ROCm@fb81400

cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd


pytorch-bot bot commented Aug 12, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/160444

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 0a46c2e with merge base 9903ca4 (image):
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added module: rocm AMD GPU support for Pytorch release notes: cuda release notes category labels Aug 12, 2025
@jerrymannil jerrymannil changed the title [ROCm] Set thread_work_size to 16 for vectorized elementwise kernels [ROCm] Set thread_work_size to 16 for vectorized elementwise kernels for MI300X Aug 12, 2025
@jeffdaily jeffdaily added release notes: rocm mandatorylabel ciflow/rocm Trigger "default" config CI on ROCm and removed release notes: cuda release notes category labels Aug 12, 2025
@jerrymannil
Contributor Author

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Aug 12, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

chuanhaozhuge pushed a commit that referenced this pull request Aug 14, 2025
…for MI300X (#160444)

* thread_work_size of 16 is giving better perf with many workloads for MI300X

cherry-pick of ROCm@fb81400

Pull Request resolved: #160444
Approved by: https://github.com/jeffdaily
pruthvistony pushed a commit to ROCm/pytorch that referenced this pull request Aug 15, 2025
chuanhaozhuge pushed a commit that referenced this pull request Aug 18, 2025
can-gaa-hou pushed a commit to can-gaa-hou/pytorch that referenced this pull request Aug 22, 2025
@jerrymannil jerrymannil deleted the patch-2 branch August 26, 2025 20:31
jerrymannil added a commit to ROCm/pytorch that referenced this pull request Sep 5, 2025
markc-614 pushed a commit to markc-614/pytorch that referenced this pull request Sep 17, 2025