Add libtorch nightly build for CUDA 12.8 #146265
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/146265
Note: Links to docs will display an error until the doc builds have been completed.
✅ You can merge normally! (2 Unrelated Failures)
As of commit 62ef609 with merge base 16e202a.
FLAKY - The following jobs failed but were likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Test failures in libtorch and manywheel with a CUDA error: the reason is that we are removing sm50 and sm60 from the 12.8 binaries in this PR, to resolve the ld --relink error in #145792 (comment). The current upstream CI tests run on a Tesla M60, which is a Maxwell (sm_5x) GPU covered by the dropped arches. Proposed solution below.
Can we not use the linker script with --relink and keep the old arch support?
Hi @Skylion007, right, --relink would work. Meanwhile, we are also deprecating sm_50/60/70 for CUDA 12.8 (they will be dropped officially in future CUDA releases), and this resolves the build error.
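For illustration, here is a minimal sketch of how the arch list could be gated on the CUDA version. The function name and exact arch set are assumptions for this sketch, not the PR's actual build-script change; the real lists live in the binary-build workflows:

```python
# Illustrative sketch only -- not the actual PyTorch build-script change.
# Assumes the build derives a TORCH_CUDA_ARCH_LIST-style string from the
# CUDA toolkit version used for the binary.

def arch_list_for_cuda(cuda_version: tuple) -> str:
    """Return a semicolon-separated TORCH_CUDA_ARCH_LIST-style string."""
    arches = ["5.0", "6.0", "7.0", "7.5", "8.0", "8.6", "9.0"]
    if cuda_version >= (12, 8):
        # Drop Maxwell (5.0) and Pascal (6.0) to shrink the binary and
        # avoid the ld --relink failure against the 1 GB libtorch limit.
        arches = [a for a in arches if float(a) >= 7.0]
    return ";".join(arches)

print(arch_list_for_cuda((12, 8)))  # 7.0;7.5;8.0;8.6;9.0
print(arch_list_for_cuda((12, 6)))  # 5.0;6.0;7.0;7.5;8.0;8.6;9.0
```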
This will drop support for the 1080 and similar consumer chips, right? We are finally starting to drop GPU arches that are commonly used and can still run modern model architectures for inference. These are very common in university clusters. SM70 only supports GV100s, right? Why not keep SM60 so torch supports more devices? Is 12.9 dropping all these CUDA arches completely, or is this just to unblock the binary-size issues? It seems there might be a longer-term alternative to fixing the 1 GB libtorch limit, such as getting the linker script to work with --relink; LTO might reduce binary size enough to save one of the arches, or the binaries could simply be split.
@Skylion007 Future CUDA versions will drop sm_50 through sm_70 completely, as @tinglvv explained, and CUDA 12.8 deprecates them.
Universities and other users stuck on older GPUs or drivers can still use PyTorch binaries built with an older CUDA toolkit (e.g., 12.6.3 or 11.8). We are keeping PyTorch binaries with CUDA 11.8 alive for 2+ years for this reason.
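For users wondering whether a given wheel still covers their GPU, a quick check using the public torch.cuda helpers (the exact arch strings depend on the build, and the exact-match test below is a simplification):

```python
import torch

# Compute capabilities this PyTorch binary ships kernels for,
# e.g. ['sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_90'] on a CUDA 12.8 wheel.
compiled = torch.cuda.get_arch_list()

# Compute capability of the GPU that is actually present,
# e.g. (6, 1) for a GTX 1080.
major, minor = torch.cuda.get_device_capability(0)
device_arch = f"sm_{major}{minor}"

# Simplified check: PTX forward compatibility can let newer GPUs run
# builds for older arches, but an older GPU needs its arch in the list.
if device_arch not in compiled:
    print(f"{device_arch} not in {compiled}; install a wheel built with "
          f"an older CUDA toolkit (e.g., 12.6 or 11.8) instead.")
```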
Fix for the build failures: use g4dn runners (T4, sm_75) for the 12.8 binary testing, as sketched below.
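Illustrative only; the real change lives in the generated GitHub Actions workflows, and the runner labels below are assumptions rather than verified values:

```python
# Hypothetical sketch of the runner-selection idea -- label names are assumed.
RUNNER_FOR_CUDA_TESTS = {
    "12.6": "linux.4xlarge.nvidia.gpu",       # older M60-class runner (Maxwell)
    "12.8": "linux.g4dn.4xlarge.nvidia.gpu",  # g4dn runner with a T4 (sm_75)
}

def runner_for(cuda_version: str) -> str:
    # Fall back to the T4 runner for any CUDA version not listed above.
    return RUNNER_FOR_CUDA_TESTS.get(cuda_version, "linux.g4dn.4xlarge.nvidia.gpu")
```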
@pytorchbot rebase
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased; force-pushed from 0bd6ed0 to 4192809.
lgtm
Force-pushed from 4192809 to 62ef609.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Try removing sm50 and sm60 to shrink the binary size and resolve the ld --relink error.
"Architecture support for Maxwell, Pascal, and Volta is considered feature-complete and will be frozen in an upcoming release." (CUDA 12.8 release notes)
Also updating the runner for the CUDA 12.8 tests to g4dn (T4, sm75) due to the drop in sm50/60 support.
#145570
Pull Request resolved: #146265
Approved by: https://github.com/atalman
cc @atalman @malfet @ptrblck @msaroufim @eqy @nWEIdia