Delay loading the cuda library on Windows #37811
Conversation
💊 Build failures summary and remediations

As of commit 2e9f988 (more details on the Dr. CI page):

🕵️ 2 new failures recognized by patterns

The following build failures do not appear to be due to upstream breakages:
- pytorch_linux_xenial_py3_6_gcc5_4_build (1/2), Step: "(Optional) Merge target branch"
Isn't nvrtc designed to dynamically depend on CUDA?
@malfet Symbols on Linux are resolved lazily by default. That's why importing a CUDA build of torch already works on a CPU-only Linux machine.
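The lazy-binding behavior described above can be observed from Python via `ctypes`, which forwards a mode flag to `dlopen`. A minimal Linux-only sketch (the math library is just a stand-in for a CUDA-dependent library; nothing here is from the PR itself):

```python
import ctypes
import ctypes.util
import os

# Illustrative sketch: open the C math library with RTLD_LAZY, so symbols
# are bound on first call rather than at load time. This mirrors the Linux
# loader behavior discussed above; on Windows, imports are resolved at DLL
# load time unless the /DELAYLOAD linker option is used.
libm_path = ctypes.util.find_library("m")  # e.g. "libm.so.6" on glibc Linux
libm = ctypes.CDLL(libm_path, mode=os.RTLD_LAZY)

libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))  # 1.0
```

With `RTLD_LAZY`, a library referencing symbols from a missing dependency can still be loaded successfully as long as those symbols are never called, which is why a CUDA-linked binary can be imported on a CPU-only Linux box.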
Wish we could test this; maybe it wouldn't be too hard to do on CircleCI. We run a similar test on Linux (loading a CUDA binary on a CPU-only machine and then running tests), and it has caught bugs.
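The CI check being proposed is essentially an import smoke test. A hypothetical sketch of such a helper, assuming the check runs in a fresh interpreter (`import_smoke_test` is an illustrative name, not from the PR; a stdlib module stands in for `torch` so the sketch is runnable anywhere):

```python
import subprocess
import sys


def import_smoke_test(module_name: str) -> bool:
    """Return True if `module_name` imports cleanly in a fresh interpreter.

    On a CPU-only CI worker, a CUDA build of torch should still import
    without the dynamic loader failing on missing CUDA libraries.
    """
    result = subprocess.run(
        [sys.executable, "-c", f"import {module_name}"],
        capture_output=True,
    )
    return result.returncode == 0


# In CI this would be import_smoke_test("torch") on a CPU-only machine;
# here a stdlib module demonstrates the mechanism.
print(import_smoke_test("json"))  # True
```

Running the import in a subprocess matters: a loader failure in the current process could not be caught as a normal exception on all platforms.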
@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Targets #37811 (comment).
Pull Request resolved: #37904
Differential Revision: D21484360
Pulled By: seemethere
fbshipit-source-id: b25cbf35b8432a587bce86815c97ff444cab255c
Summary: So we can import torch compiled with CUDA on a CPU-only machine. Needs tests.
Pull Request resolved: pytorch#37811
Differential Revision: D21417082
Pulled By: ezyang
fbshipit-source-id: 7a521b651bca7cbe38269915bd1d1b1bb756b45b
So we can import torch compiled with CUDA on a CPU-only machine.

Needs tests.
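On Windows, delay-loading a DLL is typically configured at link time with the MSVC `/DELAYLOAD` linker option plus the `delayimp` helper library. A hedged config sketch, not the actual build change from this PR (`torch_lib` and `nvcuda.dll` are illustrative names):

```cmake
# Sketch (CMake >= 3.13, MSVC only): delay-load the CUDA driver DLL so that
# a CUDA build can be imported on a machine without CUDA installed. The
# target and DLL names here are assumptions for illustration.
if(MSVC)
  target_link_options(torch_lib PRIVATE "/DELAYLOAD:nvcuda.dll")
  target_link_libraries(torch_lib PRIVATE delayimp)
endif()
```

With delay-loading, the DLL is loaded on the first call into it rather than at process start, which approximates the lazy symbol binding Linux provides by default.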