
RuntimeError: cuda runtime error (38) #5046

Closed
gwliu opened this issue Feb 5, 2018 · 4 comments

Comments

gwliu commented Feb 5, 2018

When I tried to call torch.cuda.device_count() or any other torch.cuda functions, the following error arises:

RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/torch/lib/THC/THCGeneral.c:70

I installed CUDA 8.0 and cuDNN on Ubuntu 14.04 before installing PyTorch. Running the CUDA 8.0 deviceQuery sample confirms that the device is detected correctly:

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1050"
  CUDA Driver Version / Runtime Version          9.0 / 8.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 1991 MBytes (2087714816 bytes)
  ( 5) Multiprocessors, (128) CUDA Cores/MP:     640 CUDA Cores
  GPU Max Clock rate:                            1506 MHz (1.51 GHz)
  Memory Clock rate:                             3504 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 1048576 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 101 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1050
Result = PASS

Then I pip-installed PyTorch for Python 3.6 following the instructions on the official website. In Python, 'import torch' works fine, but calling any torch.cuda function gives the runtime error above.

Could anyone figure out what might be wrong in my installation?

Thank you!
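For anyone hitting the same error: error 38 means the CUDA runtime sees no devices at all, and a common cause is a wrong or stale CUDA_VISIBLE_DEVICES value hiding the card from the process. A minimal sketch of that check (the helper name is hypothetical, not part of PyTorch; it assumes, as deviceQuery reports above, that the machine has exactly one device, index 0):

```python
import os

def device_zero_visible(env=None):
    """Return True if CUDA device index 0 would be visible to the runtime.

    The CUDA runtime honours CUDA_VISIBLE_DEVICES: if the variable is set
    and does not list an existing device index, torch.cuda sees zero
    devices and raises "no CUDA-capable device is detected" (error 38).
    """
    env = os.environ if env is None else env
    visible = env.get("CUDA_VISIBLE_DEVICES")
    if visible is None:  # unset: all devices are visible
        return True
    return "0" in visible.split(",")  # deviceQuery showed only device 0

print(device_zero_visible({"CUDA_VISIBLE_DEVICES": "3"}))  # False -> error 38
print(device_zero_visible({}))                             # True
```

If this returns False for the current environment, the runtime will report zero devices even though deviceQuery and nvidia-smi both see the card.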

apaszke (Contributor) commented Feb 5, 2018

Can you try running nvidia-smi?


gwliu commented Feb 7, 2018

Sorry for the late reply. The output of nvidia-smi is attached below.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.111                Driver Version: 384.111                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1050    Off  | 00000000:65:00.0  On |                  N/A |
|  0%   51C    P0   ERR! / 120W |    105MiB /  1991MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1430    G     /usr/bin/X                                   103MiB |
+-----------------------------------------------------------------------------+


gwliu commented Feb 8, 2018

Problem solved. I made a very silly mistake.

At the top of my script there is the line
os.environ["CUDA_VISIBLE_DEVICES"] = '3'
I did not notice it the first time I ran the program and got a different error. I then changed it to os.environ["CUDA_VISIBLE_DEVICES"] = '0' without restarting the kernel, and that is when I got this error.

Restarting the program with os.environ["CUDA_VISIBLE_DEVICES"] = '0' solves the problem.
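A restart is needed because the CUDA runtime reads CUDA_VISIBLE_DEVICES only once, when the CUDA context is first initialized; changing os.environ afterwards in the same process has no effect. A minimal sketch of the working order (the torch lines are commented out so the snippet runs without a GPU):

```python
import os

# Set CUDA_VISIBLE_DEVICES before the CUDA context is initialized,
# i.e. before the first torch.cuda call (safest: before importing torch).
# The runtime reads this variable once per process, which is why editing
# it after a failed torch.cuda call requires restarting the interpreter.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import torch
# torch.cuda.device_count()  # device 0 is visible from the start
```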

DianeTOY commented

Thank you very much for your answer! I had the same problem, and your fix solved it.
