RuntimeError: CUDA error: initialization error #21092

@Hananel-Hazan

Description

🐛 Bug

Setting the default tensor location to CUDA makes torch.utils.data.DataLoader produce RuntimeError: CUDA error: initialization error.

To Reproduce

  1. Copy the MNIST example (link).
  2. Add the following lines before (or after) the corresponding line in the example:

    device_id = 0
    if use_cuda:
        torch.cuda.set_device(torch.device("cuda:" + str(device_id) if torch.cuda.is_available() else "cpu"))
        torch.set_default_tensor_type('torch.cuda.FloatTensor')
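
For reference, a minimal standalone sketch that triggers the same error without the full MNIST script (the synthetic TensorDataset, tensor shapes, and batch size are placeholders, not taken from the example):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    use_cuda = torch.cuda.is_available()
    device_id = 0
    if use_cuda:
        torch.cuda.set_device(torch.device("cuda:" + str(device_id)))
        torch.set_default_tensor_type('torch.cuda.FloatTensor')

    # DataLoader kwargs as in the MNIST example: with CUDA available,
    # a worker process is used for data loading.
    kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}

    # With the default tensor type set to CUDA, these tensors live on the GPU.
    data = torch.randn(64, 1, 28, 28)
    targets = torch.zeros(64).long()
    loader = DataLoader(TensorDataset(data, targets), batch_size=16, **kwargs)

    # Fetching the first batch in the forked worker fails with
    # RuntimeError: CUDA error: initialization error
    for batch, target in loader:
        pass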

Result

Running the modified example raises RuntimeError: CUDA error: initialization error.

Workaround

To avoid the error, kwargs needs to be empty, i.e. the line has to read kwargs = {}.
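
A sketch of the change in context (train_dataset and the batch size are placeholders; only the kwargs line is the point):

    # Line in the example (roughly):
    # kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
    # Workaround: keep data loading in the main process so no forked
    # worker has to initialize CUDA.
    kwargs = {}

    train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, **kwargs)

With kwargs empty the DataLoader falls back to num_workers=0, so batches are produced in the main process, which already holds the CUDA context.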

Expected behavior

Setting the default tensor location to CUDA should not produce an error.

Environment

Collecting environment information...
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176

OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: version 3.10.0

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 8.0.61
GPU models and configuration:
GPU 0: GeForce GTX TITAN X
GPU 1: GeForce GTX TITAN X

Nvidia driver version: 410.104
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.6.0.21
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.1

Versions of relevant libraries:
[pip3] numpy==1.16.3
[pip3] torch==1.1.0
[pip3] torchvision==0.3.0
[conda] Could not collect


    Labels

    module: dataloader (Related to torch.utils.data.DataLoader and Sampler)
    triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
