
AssertionError: CUDA is not available #1

Closed
ChaoYue0307 opened this issue Jun 20, 2018 · 4 comments
@ChaoYue0307

I have CUDA version 9.0.176 on my server, but I still get the error in the title when running train.py. How can I handle that?
Thanks

@yihong-chen
Owner

Did you test your environment setup with some PyTorch baselines, for example the MNIST classification example? It seems that CUDA isn't installed properly.
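
A quick way to sanity-check the PyTorch + CUDA installation before running any baseline (a minimal generic sketch, not part of this repository):

```python
# Minimal sanity check for the PyTorch + CUDA installation.
import torch

print(torch.__version__)             # installed PyTorch version
print(torch.version.cuda)            # CUDA version PyTorch was built with
print(torch.cuda.is_available())     # must be True, otherwise the assertion in train.py fails
if torch.cuda.is_available():
    x = torch.randn(2, 3).cuda()     # move a small tensor to the default GPU
    print(x.device)                  # e.g. cuda:0
```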

@ChaoYue0307
Author

Thanks! I reinstalled PyTorch and CUDA, and that problem is solved.
But a new problem comes up:
RuntimeError: cuda runtime error (10) : invalid device ordinal at torch/csrc/cuda/Module.cpp:32
Do you have any idea about that?

Thanks

@yihong-chen
Owner

It seems you should check your device_id; it must be smaller than the number of available GPUs (device IDs are zero-indexed).
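
For reference, the "invalid device ordinal" error is raised when the requested GPU index is outside the range of visible devices; a quick check (a generic sketch, not repo code):

```python
# Check how many GPUs PyTorch can see and select a valid ordinal.
import torch

n_gpus = torch.cuda.device_count()
print("visible GPUs:", n_gpus)               # e.g. 4 on the reporter's server

device_id = 3                                # must satisfy 0 <= device_id < n_gpus
assert 0 <= device_id < n_gpus, "invalid device ordinal"
torch.cuda.set_device(device_id)             # raises the runtime error otherwise
print(torch.cuda.current_device())           # 3
```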

@ChaoYue0307
Author

Thanks a lot! Indeed the problem was the device_id: your default is 7, but I only have 4 GPUs on my server.
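
For anyone hitting the same issue: assuming the training script reads the GPU index from a device_id setting (as described in this thread; the exact key name in train.py is not verified here), one defensive option is to clamp it to the GPUs that actually exist:

```python
# Fall back to GPU 0 if the configured device_id is out of range
# (a sketch; `config['device_id']` is assumed from this thread, check train.py for the real key).
import torch

config = {'device_id': 7}                    # default mentioned in the thread; server only has 4 GPUs

if torch.cuda.is_available():
    n_gpus = torch.cuda.device_count()
    if config['device_id'] >= n_gpus:
        print(f"device_id {config['device_id']} out of range, falling back to GPU 0")
        config['device_id'] = 0
    torch.cuda.set_device(config['device_id'])
```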
