RuntimeError: cuda runtime error (38) #5046
Comments
Can you try running …
Sorry for the late reply. Here is the output: …
Problem solved. Restarting the program with `os.environ["CUDA_VISIBLE_DEVICES"] = '0'` solves the problem.
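A minimal sketch of the workaround above. The key point is ordering: the CUDA runtime reads `CUDA_VISIBLE_DEVICES` once, during initialization, so the variable must be set before `torch` is first imported. The index `'0'` assumes the GPU you want is device 0.

```python
import os

# Set this *before* importing torch; changing it afterwards has no effect,
# because the CUDA runtime only reads the variable at initialization time.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # '0' assumes the GPU is device 0

# import torch                      # imported only after the variable is set
# print(torch.cuda.device_count())  # should now report 1 visible device
```

Alternatively, the variable can be set in the shell (`CUDA_VISIBLE_DEVICES=0 python script.py`), which avoids any ordering concern inside the script.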
Thank you very much for your answer. I had the same problem, and it's solved now!
When I tried to call `torch.cuda.device_count()` or any other torch.cuda function, the following error arose: `RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/torch/lib/THC/THCGeneral.c:70`
I installed CUDA 8.0 and cuDNN on my Ubuntu 14.04 machine before installing PyTorch. Checking the CUDA 8.0 installation with deviceQuery gives the expected output:
```
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1050"
  CUDA Driver Version / Runtime Version          9.0 / 8.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 1991 MBytes (2087714816 bytes)
  ( 5) Multiprocessors, (128) CUDA Cores/MP:     640 CUDA Cores
  GPU Max Clock rate:                            1506 MHz (1.51 GHz)
  Memory Clock rate:                             3504 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 1048576 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z):  (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 101 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1050
Result = PASS
```
Then I pip-installed PyTorch for Python 3.6 following the official website instructions. In Python, `import torch` works fine, but calling any torch.cuda function raises the runtime error above.
Could anyone figure out what might be wrong with my installation?
Thank you!
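When deviceQuery passes but PyTorch still reports error 38, the mismatch often comes from the process environment rather than the installation itself. A small stdlib-only sketch (no PyTorch required) that gathers the two facts most commonly behind "no CUDA-capable device is detected" — an empty or wrong `CUDA_VISIBLE_DEVICES`, and missing `/dev/nvidia*` device nodes:

```python
import glob
import os

def cuda_env_report():
    """Collect environment facts that commonly explain CUDA error 38
    ('no CUDA-capable device is detected')."""
    report = {}
    # An empty string here hides every GPU from the CUDA runtime,
    # even though deviceQuery (run in a clean shell) still passes.
    report["CUDA_VISIBLE_DEVICES"] = os.environ.get(
        "CUDA_VISIBLE_DEVICES", "<unset>")
    # If no /dev/nvidia* nodes exist, the kernel driver never created
    # the device files (e.g. nvidia-modprobe did not run after reboot).
    report["device_nodes"] = sorted(glob.glob("/dev/nvidia*"))
    return report

if __name__ == "__main__":
    for key, value in cuda_env_report().items():
        print(f"{key}: {value}")
```

If `CUDA_VISIBLE_DEVICES` shows up empty, unset it or set it to `0`; if no device nodes appear, rerunning `nvidia-smi` as root or rebooting typically recreates them.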