cutorch fails on getDeviceCount() #782
Installed torch and CUDA.

test.py:

Output:

Related code:

Maybe the problem is related to the fact that my GPU is not NVIDIA, while cutorch is required as part of a third-party project I'm trying to use.

Comments
Please do not post the same question in multiple places; it only creates noise. cutorch/cunn only support NVIDIA CUDA devices, as explained here.
Thanks for your comment.
Sure. Can I avoid the crash when importing it as part of a bigger project that sometimes runs on unsupported devices?
Do not require it? You won't be able to use anything from it anyway.
I would still use other parts of the enclosing project. Do you mean it should fail in this situation by design? I would suggest some other control path.
Any workaround?...
Make sure the library does not require cutorch, cunn or cudnn when no CUDA device is available.
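One way to follow that advice is a minimal sketch in Lua: guard the requires with pcall so the rest of the project still loads on machines without an NVIDIA GPU. This assumes the enclosing project currently does a plain require at startup, and that cutorch raises an ordinary Lua error (rather than aborting the process) when no CUDA device is found.

```lua
-- Minimal sketch: load the CUDA packages optionally so the rest of the
-- project keeps working on machines without an NVIDIA GPU.
local hasCutorch, cutorch = pcall(require, 'cutorch')
local hasCunn = pcall(require, 'cunn')

if hasCutorch and hasCunn then
  print('CUDA backend available, devices: ' .. cutorch.getDeviceCount())
else
  print('CUDA backend unavailable; GPU features disabled')
end
```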
The problem is that I do need those requirements. If I can ultimately do without them, there is no issue (I don't want to waste your time). This is the example code: it converts a GPU model to a CPU model so that I can run it without an NVIDIA GPU.
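A minimal sketch of such a conversion script (the file names are hypothetical):

```lua
-- Hypothetical sketch of the conversion described above. Deserializing a
-- model saved with CUDA tensors pulls in cutorch/cunn, which is exactly
-- the step that fails on a non-NVIDIA machine.
require 'torch'
require 'nn'
require 'cutorch'  -- needed to deserialize CudaTensor parameters
require 'cunn'     -- needed to deserialize GPU nn modules

local gpuModel = torch.load('model_gpu.t7')  -- model saved on a GPU machine
local cpuModel = gpuModel:float()            -- copy parameters into FloatTensors
torch.save('model_cpu.t7', cpuModel)         -- loadable later without CUDA
```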
As you can see, the task itself doesn't seem to require the hardware (does it?). At the same time it fails on getDeviceCount().
The thing is that if you have a GPU model (with modules built to run on the GPU and parameters stored as CUDA tensors), you need cutorch and cunn to load it. Unfortunately there is no workaround: you need a CUDA device to load a Torch GPU model.
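A sketch of the workflow this implies: run the conversion once on a machine that does have an NVIDIA GPU, after which the saved float model loads anywhere with plain torch/nn (file name and input shape below are hypothetical).

```lua
-- On a CPU-only machine: no cutorch/cunn needed, since the converted
-- model contains only FloatTensors and CPU nn modules.
require 'torch'
require 'nn'

local model = torch.load('model_cpu.t7')
local input = torch.FloatTensor(1, 3, 32, 32):zero()  -- hypothetical input shape
print(model:forward(input):size())
```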
Thanks a lot, this is very helpful.