
Conversation

nilreml (Contributor) commented Apr 1, 2024

Lower the minimum number of CUDA SMs required for GEMM autotuning from the V100 level to the RTX 3080 level, allowing some high-end consumer GPUs to benefit as well.

Fixes #109489

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov @ColinPeppler @amjames @desertfire @chauhang @jansel @jon-chuang
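
For context, GEMM autotuning in Inductor is gated behind a check on the device's streaming-multiprocessor count (the `is_big_gpu` check discussed in #109489). Below is a minimal sketch of that kind of gate; the helper name, the exact constant, and the example output are illustrative assumptions rather than the actual PyTorch source. A V100 has 80 SMs while an RTX 3080 has 68, so dropping the threshold to the 3080 level lets those consumer cards take the autotuning path.

```python
import torch

# Assumed threshold for illustration: the RTX 3080's SM count (a V100 has 80).
MIN_SMS = 68

def is_big_gpu(device_index: int = 0) -> bool:
    """Sketch of an SM-count gate for GEMM autotuning (not the real source)."""
    props = torch.cuda.get_device_properties(device_index)
    return props.multi_processor_count >= MIN_SMS

if __name__ == "__main__":
    if torch.cuda.is_available():
        name = torch.cuda.get_device_name(0)
        print(f"{name}: eligible for GEMM autotuning = {is_big_gpu(0)}")
```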

pytorch-bot bot commented Apr 1, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/123121

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 901b3c6 with merge base 76a87e3:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

linux-foundation-easycla bot commented Apr 1, 2024

CLA Signed


The committers listed above are authorized under a signed CLA.

@jansel added the `topic: not user facing` label Apr 1, 2024
jansel (Contributor) commented Apr 1, 2024

@pytorchbot merge

@pytorch-bot bot added the `ciflow/trunk` label (Trigger trunk jobs on your pull request) Apr 1, 2024
pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

pytorchmergebot (Collaborator)

Merge failed

Reason: [Errno 104] Connection reset by peer

Details for Dev Infra team: raised by workflow job

jansel (Contributor) commented Apr 1, 2024

@pytorchbot merge

pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

sanketpurandare pushed a commit to sanketpurandare/pytorch that referenced this pull request Apr 22, 2024
…h#123121)

Lower the minimum number of CUDA SMs required for GEMM autotuning from V100 to 3080 level, allowing some high-end consumer GPUs to benefit as well.

Fixes pytorch#109489

Pull Request resolved: pytorch#123121
Approved by: https://github.com/jansel

Labels: topic: not user facing, ciflow/trunk

Projects: none yet

Development

Successfully merging this pull request may close these issues.

Investigate Strictness of torch.compile is_big_gpu

4 participants