Version 1.3 no longer supporting Tesla K40m? #30532
Labels
module: binaries — Anything related to official binaries that we release to users
module: cuda — Related to torch.cuda, and CUDA support in general
module: docs — Related to our documentation, both in docs/ and docblocks
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Bug
I am using a Tesla K40m and installed PyTorch 1.3 with conda, using CUDA 10.1.
To Reproduce
Steps to reproduce the behavior:

1. `conda install pytorch cudatoolkit -c pytorch`
2. Run a model's `.forward()` — it fails.

First tried downgrading to cudatoolkit=10.0; that exhibited the same issue.

The code will run fine if you repeat the steps above but instead install with:

`conda install pytorch=1.2 cudatoolkit=10.0 -c pytorch`

Expected behavior
If a specific GPU is no longer supported, please fail at load time with a useful error message.
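A minimal sketch of the kind of load-time check requested above. Assumptions are mine, not from the report: the `torch.cuda` calls shown exist in current releases, the K40m reports compute capability (3, 5), and the `MIN_CAPABILITY` value is a hypothetical placeholder for whatever the shipped binaries actually target.

```python
# Sketch (assumption): verify the visible GPU's compute capability up front,
# so an unsupported card produces a clear message instead of a failure deep
# inside a model's forward() call.
try:
    import torch
except ImportError:
    torch = None

# Hypothetical minimum for the shipped binaries; the K40m reports (3, 5).
MIN_CAPABILITY = (3, 7)

def check_gpu() -> str:
    """Return a human-readable verdict on GPU support for this build."""
    if torch is None or not torch.cuda.is_available():
        return "no CUDA device visible"
    name = torch.cuda.get_device_name(0)
    cap = torch.cuda.get_device_capability(0)   # e.g. (3, 5) on a K40m
    if cap < MIN_CAPABILITY:                    # tuple comparison: major, minor
        return f"{name} (sm_{cap[0]}{cap[1]}) is not supported by this build"
    return f"{name} (sm_{cap[0]}{cap[1]}) is supported"

print(check_gpu())
```

Running this on a K40m with the 1.3 binaries would, under these assumptions, print a clear "not supported" message instead of crashing later.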
Environment
Unfortunately I ran your environment script after I 'fixed' the problem, so the PyTorch version will show as 1.2 here; the issue was encountered with version 1.3.
cc @ezyang @gchanan @zou3519 @jerryzh168 @ngimel