Embedding layer does not check input range #19120

Open

PavelOstyakov opened this issue Apr 10, 2019 · 1 comment

Labels
module: cuda (Related to torch.cuda, and CUDA support in general)
module: error checking (Bugs related to incorrect/lacking error checking)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@PavelOstyakov

🐛 Bug

The Embedding layer does not check the input index range when running on CUDA.

To Reproduce

Code to reproduce the behavior:

import torch
import torch.nn as nn

emb = nn.Embedding(10, 20).cuda()  # valid indices are 0..9
x = torch.zeros(5, 1).long() + 10  # every index is 10, which is out of range
x = x.cuda()

y = emb(x)  # no error is raised here

z = torch.zeros(10).cuda()  # the failure only surfaces on this later CUDA call
RuntimeError                              Traceback (most recent call last)
<ipython-input-1-758fb390ae81> in <module>()
      8 y = emb(x)
      9 
---> 10 z = torch.zeros(10).cuda()

RuntimeError: CUDA error: device-side assert triggered
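
Because CUDA kernels are launched asynchronously, the device-side assert is reported at a later, unrelated call (line 10 above) rather than at the embedding lookup itself. A minimal sketch of how to localize it is to force synchronous kernel launches; this only changes where the error is reported, not the underlying failure:

import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before CUDA is initialized

import torch
import torch.nn as nn

emb = nn.Embedding(10, 20).cuda()
x = (torch.zeros(5, 1).long() + 10).cuda()  # out-of-range indices
y = emb(x)  # with blocking launches, the assert is reported at this line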

Expected behavior

It should raise a clear, recoverable error reporting that the index is out of range.
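
For comparison, the same lookup on the CPU fails immediately with an ordinary Python exception. A minimal sketch (the exact exception type and message vary across PyTorch versions, so both are caught here):

import torch
import torch.nn as nn

emb_cpu = nn.Embedding(10, 20)  # valid indices are 0..9
bad = torch.full((5, 1), 10, dtype=torch.long)  # index 10 is out of range

try:
    emb_cpu(bad)
except (IndexError, RuntimeError) as e:
    # the CPU path raises right away with an "index out of range" message
    print(type(e).__name__, ":", e)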

Environment

PyTorch version: 1.0.1.post2
Is debug build: No
CUDA used to build PyTorch: 9.0.176

OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: Could not collect

Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 9.1.85
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti

Nvidia driver version: 390.116
cuDNN version: Could not collect

Versions of relevant libraries:
[pip3] numpy==1.16.2
[pip3] torch==1.0.1.post2
[pip3] torchvision==0.2.2.post3
[conda] Could not collect

@colesbury
Member

We're unlikely to be able to check the index bounds as a recoverable Python error, due to the performance implications (a possible opt-in check on the user side is sketched after this list). But we can do a few things:

  1. Improve the error message (improved assert message in the case of "CUDA error: device-side assert triggered" #17425)
  2. Add "index out of bounds" to the assertion, as in Tensor indexing
  3. Document the behavior in https://pytorch.org/docs/stable/nn.html?highlight=embedding#torch.nn.Embedding
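
Such a check can be done explicitly on the host when debugging, at the cost of an extra reduction over the indices and a device synchronization. This is a hypothetical helper sketched for illustration; safe_embedding is not a PyTorch API:

import torch
import torch.nn as nn

def safe_embedding(emb: nn.Embedding, indices: torch.Tensor) -> torch.Tensor:
    # Explicit bounds check; .item() synchronizes when the indices live on the GPU,
    # which is exactly the overhead that makes an always-on check too expensive.
    lo = int(indices.min().item())
    hi = int(indices.max().item())
    if lo < 0 or hi >= emb.num_embeddings:
        raise IndexError(
            f"embedding index out of range: got [{lo}, {hi}], "
            f"expected [0, {emb.num_embeddings - 1}]"
        )
    return emb(indices)

emb = nn.Embedding(10, 20)
ok = torch.randint(0, 10, (5, 1))
print(safe_embedding(emb, ok).shape)  # torch.Size([5, 1, 20])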

@gchanan added the module: cuda, module: error checking, and triaged labels on Apr 10, 2019