cupy.cuda.driver.CUDADriverError: CUDA_ERROR_INVALID_HANDLE: invalid resource handle #8

Closed
syinari0123 opened this issue May 6, 2018 · 4 comments · Fixed by #16

@syinari0123

Thank you for publishing such great code!
I have a question.

When I use this spherical convolution in our network and try to train the model on multiple GPUs with torch.nn.DataParallel(model).cuda(), I get the following error message.

  File "/home/users/.python_venv_3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/users/.python_venv_3/lib/python3.5/site-packages/s2cnn-1.0.0-py3.5.egg/s2cnn/soft/s2_conv.py", line 40, in forward
  File "/home/users/.python_venv_3/lib/python3.5/site-packages/s2cnn-1.0.0-py3.5.egg/s2cnn/soft/gpu/s2_fft.py", line 225, in forward
  File "/home/users/.python_venv_3/lib/python3.5/site-packages/s2cnn-1.0.0-py3.5.egg/s2cnn/soft/gpu/s2_fft.py", line 27, in s2_fft
  File "/home/users/.python_venv_3/lib/python3.5/site-packages/s2cnn-1.0.0-py3.5.egg/s2cnn/soft/gpu/s2_fft.py", line 51, in _s2_fft
  File "cupy/cuda/function.pyx", line 147, in cupy.cuda.function.Function.__call__
  File "cupy/cuda/function.pyx", line 129, in cupy.cuda.function._launch
  File "cupy/cuda/driver.pyx", line 195, in cupy.cuda.driver.launchKernel
  File "cupy/cuda/driver.pyx", line 75, in cupy.cuda.driver.check_status
cupy.cuda.driver.CUDADriverError: CUDA_ERROR_INVALID_HANDLE: invalid resource handle

Can DataParallel be used with this spherical convolution? Or is there a plan to support it in the future?
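
For reference, here is a minimal sketch of what I am doing. The S2Convolution arguments just follow the repository's MNIST example and are only illustrative, not my actual network:

```python
import torch
import torch.nn as nn
from s2cnn import S2Convolution, s2_near_identity_grid

b_in, b_out = 30, 10
model = nn.Sequential(
    S2Convolution(nfeature_in=1, nfeature_out=8, b_in=b_in, b_out=b_out,
                  grid=s2_near_identity_grid()),
)
model = nn.DataParallel(model).cuda()   # replicate over all visible GPUs

x = torch.randn(8, 1, 2 * b_in, 2 * b_in).cuda()  # [batch, feature, beta, alpha]
out = model(x)  # raises CUDA_ERROR_INVALID_HANDLE when more than one GPU is used
```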

@mariogeiger
Collaborator

This is probably because we use our own CUDA kernels, which we compile and execute with cupy. I'm not fully satisfied with having to use cupy, but it seems to be the easiest way to do it.
We have never tried to use more than one GPU...
If you find a way to make it work, please share the solution.
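
To illustrate the likely cause: cupy compiles a kernel in the CUDA context of the device that is current at compile time, so a handle created on GPU 0 is invalid when a DataParallel replica launches it on GPU 1. Below is only a rough sketch of the idea behind a fix, compiling and caching the kernel once per device; it uses cupy.RawKernel and a toy kernel, not the actual s2cnn code:

```python
import cupy

# Toy kernel standing in for the s2cnn FFT kernels; the point is the per-device
# caching, not the kernel body.
_SRC = r'''
extern "C" __global__ void scale(float* x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}
'''

_kernels = {}  # device id -> compiled kernel, so every GPU gets its own handle

def _get_kernel(device_id):
    if device_id not in _kernels:
        with cupy.cuda.Device(device_id):
            _kernels[device_id] = cupy.RawKernel(_SRC, 'scale')
    return _kernels[device_id]

def scale_inplace(x, a):
    # x is a cupy array; launch on the device that owns it, using a kernel
    # compiled in that device's context.
    dev = x.device.id
    n = x.size
    with cupy.cuda.Device(dev):
        _get_kernel(dev)(((n + 255) // 256,), (256,),
                         (x, cupy.float32(a), cupy.int32(n)))
```

With a single global kernel compiled on the default device, replicas running on the other GPUs would hit exactly the invalid-handle error reported above.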

@syinari0123
Author

Thank you for replying.
I understand. I'll share a solution if I find one.

@mariogeiger
Collaborator

@syinari0123 did the modifications made by @Archer-Tatsu solve the problem?

@mariogeiger reopened this Jun 6, 2018
@mariogeiger
Collaborator

Since there is no answer, I consider this issue closed. Re-open it if the PR does not fix the issue.
