Error importing torch on machines without cuda #16
Hello, the reason is that the
The current main branch now has a USE_CUDA flag.
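For reference, a hedged sketch of what a source build with that flag might look like (the exact invocation is an assumption, not taken from this thread; the repository's Dockerfile is the authoritative reference):

```shell
# Assumed build fragment: USE_CUDA=1 asks the PyTorch build to compile CUDA
# kernels into the wheel. Whether the resulting wheel also imports cleanly on
# machines without CUDA depends on how the CUDA runtime libraries are linked
# or bundled, which is the crux of this issue.
export USE_CUDA=1
python setup.py bdist_wheel   # the wheel is written to dist/
```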
Thank you for the answer! Perhaps I didn't explain myself well, sorry about that! My goal is to have a build that supports CUDA but does not fail to run if CUDA isn't installed, like the wheels PyTorch provides (e.g., https://download.pytorch.org/whl/cu110/torch-1.7.1%2Bcu110-cp38-cp38-linux_x86_64.whl). With that wheel I can run PyTorch on CPU even though I do not have CUDA installed.
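The behavior being asked for can be sketched as a runtime check that falls back to CPU when CUDA is unavailable. This is a minimal illustration, not code from the thread; `pick_device` is a hypothetical helper name:

```python
# Minimal sketch of the desired behavior: importing torch should succeed on a
# CUDA-less machine, with device selection falling back to CPU at runtime.
import importlib.util


def pick_device() -> str:
    """Return 'cuda' when torch is importable and CUDA is usable, else 'cpu'."""
    if importlib.util.find_spec("torch") is None:
        # torch itself is not installed; CPU is the only safe answer.
        return "cpu"
    import torch
    # torch.cuda.is_available() is the standard runtime check the official
    # wheels rely on to run CPU-only when no CUDA driver is present.
    return "cuda" if torch.cuda.is_available() else "cpu"


if __name__ == "__main__":
    print(pick_device())
```

A wheel built with CUDA support only satisfies this pattern if merely importing torch does not hard-fail on a machine without the CUDA libraries.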
Hello. I will close this issue for now, but please feel free to reopen it if any other issues occur.
Hello!
First of all, thank you for the amazing Dockerfile template to build PyTorch from the source!
I was able to use the produced wheel in environments that have CUDA installed. However, in environments that do not have CUDA, I am not able to execute `import torch`; it correctly reports that I do not have CUDA.
Being unable to import torch in environments that do not have CUDA is a big problem for me. Do you know which build configuration is causing this? With the wheels PyTorch publishes, I am able to import torch.
Thank you again!