torch.cuda.is_available() returns False #6041

@tiagoft

Description

Hello,

I am having trouble using CUDA with PyTorch. I am migrating from Theano (maybe this is part of the problem?). I tried installing PyTorch with pip, then uninstalled it and tried with conda, and finally tried compiling from source using python setup.py install.

CUDA seems to work everywhere else on the system, so I suspect this could be a permissions issue or a broken path somewhere (two quick sanity checks along those lines are at the end of this report). I see that many people are having the same problem; have you been able to solve it somehow?

Thanks!

This is some info about my system and my installation:

  • OS: Ubuntu 16.04
  • PyTorch version: 0.4.0a0+1ab248d (as stated in torch.__version__)
  • How you installed PyTorch (conda, pip, source): from source, although I have tried pip and conda and had the same problem.
  • Python version: 2.7
  • CUDA/cuDNN version: 8.0 / 7.2.1
  • GPU models and configuration: GeForce GTX 1070
  • GCC version (if compiling from source): 5.4.1

~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Sun_Sep__4_22:14:01_CDT_2016
Cuda compilation tools, release 8.0, V8.0.44

~$ nvidia-smi
Tue Mar 27 10:59:19 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.42                 Driver Version: 390.42                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1070    Off  | 00000000:01:00.0  On |                  N/A |
|  0%   49C    P8     6W / 151W |    266MiB /  8118MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1452      G   /usr/lib/xorg/Xorg                           144MiB |
|    0      2751      G   compiz                                       118MiB |
+-----------------------------------------------------------------------------+

~$ python -c 'import torch; print torch.cuda.is_available()'
False

~$ python -c 'import torch; print torch.rand(2,3).cuda()'
THCudaCheck FAIL file=/home/username/pytorch/aten/src/THC/THCGeneral.cpp line=70 error=30 : unknown error
Traceback (most recent call last):
File "", line 1, in
RuntimeError: cuda runtime error (30) : unknown error at /home/username/pytorch/aten/src/THC/THCGeneral.cpp:70
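
Since error 30 is just a generic "unknown error" from the CUDA runtime while it initializes, and my guess above was permissions or a broken path, here is the first sanity check I put together. The /dev/nvidia* node names and the environment variables below are only the usual Ubuntu defaults (assumptions on my part), not anything taken from the PyTorch build:

import os
import stat

# Check that the NVIDIA device nodes exist and are readable/writable by my user.
# These node names are the usual Ubuntu defaults (an assumption), not something
# queried from PyTorch or the driver.
for node in ("/dev/nvidia0", "/dev/nvidiactl", "/dev/nvidia-uvm"):
    if os.path.exists(node):
        mode = stat.S_IMODE(os.stat(node).st_mode)
        access = os.access(node, os.R_OK | os.W_OK)
        print("%s mode=%s rw-access=%s" % (node, oct(mode), access))
    else:
        print("%s is missing" % node)

# Check the environment variables that could point CUDA at the wrong place.
for var in ("CUDA_HOME", "LD_LIBRARY_PATH", "CUDA_VISIBLE_DEVICES"):
    print("%s=%r" % (var, os.environ.get(var)))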

I was under the impression that PyTorch was not finding CUDA at runtime, so I tried:
~$ CUDA_HOME="/usr/local/cuda" python -c 'import torch; print torch.cuda.is_available()'
False
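
For what it's worth, a minimal check like the following (a sketch, second of the two sanity checks mentioned above) should separate a driver-level failure from a PyTorch-specific one: it calls cuInit() straight from the CUDA driver library through ctypes, bypassing PyTorch entirely. Loading the library as "libcuda.so" is an assumption about my install; on some systems it only exists as libcuda.so.1. A non-zero return code here would point at the driver or device nodes rather than at the PyTorch build:

import ctypes

# Load the CUDA driver library directly; the name "libcuda.so" is an assumption,
# on some installs only "libcuda.so.1" is present.
cuda = ctypes.CDLL("libcuda.so")

# cuInit(0) returns a CUresult; 0 (CUDA_SUCCESS) means the driver initialized.
result = cuda.cuInit(0)
print("cuInit returned %d" % result)

# cuDeviceGetCount takes an int* and reports how many devices the driver sees.
count = ctypes.c_int(0)
result = cuda.cuDeviceGetCount(ctypes.byref(count))
print("cuDeviceGetCount returned %d, count=%d" % (result, count.value))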
