RuntimeError: Not compiled with GPU support #82
Comments
I have the same problem. Did anyone fix it?
Were you able to solve this issue? I am facing the same problem with pytorch 1.4.0-py3.6_cuda101_cudnn7_0 and torchvision 0.5.0-py36_cu101. The error is raised by `_backend.dcn_v2_forward`, where `_backend` should be the `_ext` module built by make.sh. I'm not sure whether `_ext` refers to this `_ext.cp36-win_amd64.pyd` file, and not sure how to proceed from here.
@wenjiey2 @SharifElfouly Hi, I have fixed it. In my case, I was using a virtual env and running the code on a compute node by submitting a task to the server, but I had installed the environment on a node without a GPU, which produced this error. I solved it by installing everything, including the virtual env, on a node with a GPU, and now it works.
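For anyone hitting the same situation, a quick way to confirm the build node can actually see a GPU before compiling (these are plain diagnostic commands, not specific to this repo; the final step assumes you build via the repo's make.sh):

```shell
# Run these ON THE NODE WHERE YOU BUILD, not only where you train.
# If is_available() prints False here, the extension will be compiled
# without GPU support and dcn_v2_forward will raise at runtime.
nvidia-smi
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# Rebuild the extension once both checks look right:
./make.sh
```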
@XiaoSanGit Thank you! Could you please explain in detail how to solve this issue?
I resolved this issue by forcing Lines 34 to 42 in c7f778f
@allenwu5, thanks for posting your solution. I tried to replicate it and understood that the problem is the following (at least for me): the GPU check fails at build time, so the CUDA sources in that block are never compiled.
Therefore, if I just force the code to go through that loop (by removing the check around it), the extension builds with GPU support.
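To make the workaround above concrete: setup-time scripts for CUDA extensions typically pick `CUDAExtension` only when a GPU is visible during the build, and fall back to a CPU-only `CppExtension` otherwise. The sketch below illustrates that selection logic; `choose_extension` is a hypothetical helper of mine, not a function from this repo, which inlines the same branch around `torch.cuda.is_available()`.

```python
# Sketch of the build-time branch that decides GPU vs CPU compilation.
# `choose_extension` is a hypothetical name for illustration only.
def choose_extension(cuda_available, force_cuda=False):
    """Return which torch extension class and macros a build would use."""
    if cuda_available or force_cuda:
        # CUDAExtension compiles the .cu kernels and defines WITH_CUDA,
        # giving dcn_v2_forward a GPU code path.
        return ("CUDAExtension", ["-DWITH_CUDA"])
    # CppExtension builds CPU-only; calling the op on a GPU tensor then
    # raises "RuntimeError: Not compiled with GPU support".
    return ("CppExtension", [])

# Forcing the CUDA branch, as described above, amounts to:
print(choose_extension(cuda_available=False, force_cuda=True))
# ('CUDAExtension', ['-DWITH_CUDA'])
```

Note that forcing the branch only helps when a CUDA toolkit is actually present for nvcc to use; otherwise the build itself fails.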
After some research, I understood that the problem was that I actually did not have CUDA installed. You can find this out by checking for the CUDA toolkit on your machine: if nothing is returned, it means that you did not install CUDA. I followed all of this: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/ and I installed CUDA from the official download link. After what's explained above, I ran the installation again.
Then you can add the CUDA folder to your environment (before doing it you should check that this is the folder in which CUDA has been installed on your machine). I hope this helps someone else in the same situation.
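For reference, the environment setup being described is presumably the standard post-install step from the NVIDIA guide, along these lines (the /usr/local/cuda path is an assumption; substitute the folder your installer actually used):

```shell
# Assumption: CUDA was installed under /usr/local/cuda
# (verify with `ls /usr/local` before exporting these).
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
nvcc --version   # should now print the installed CUDA release
```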
I am in something deeper, can you help?
With or without the environment active, typing the check commands gives me the same output.
Any tips on how I can get the nvidia-cuda-toolkit for version 11.4?
I fixed this by reinstalling CUDA 11.4 using the run file from NVIDIA, but now I am facing different issues that are already reported on this repo, like an import error for _ext. Switching to other issue threads now.
For others coming later, remember to set this environment variable as well.
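The variable name did not survive in the comment above; presumably it is CUDA_HOME, which PyTorch's extension build uses to locate the toolkit. A minimal sketch under that assumption, with a /usr/local/cuda install:

```shell
# Assumption: the variable meant above is CUDA_HOME; adjust the path
# if your toolkit lives elsewhere (e.g. /usr/local/cuda-11.4).
export CUDA_HOME=/usr/local/cuda
# Rebuild the extension so the setting takes effect:
./make.sh
```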
It works for me, thanks!
I get this error when running testcuda.py on a Linux server.
I tested torch.cuda.is_available() and got True.
My cuda version: 10.1
My torch version: 1.4
My python version: 3.6.9
It seems to have built successfully:

```
copying build/lib.linux-x86_64-3.6/_ext.cpython-36m-x86_64-linux-gnu.so ->
Creating /NAS/home01/tanzhenwei/.pyenv/versions/3.6.9/envs/tzwpy/lib/python3.6/site-packages/DCNv2.egg-link (link to .)
DCNv2 0.1 is already the active version in easy-install.pth
Installed /NAS/project01/rzimmerm_substitles/FairMot_compressing/src/lib/models/networks/DCNv2_new
Processing dependencies for DCNv2==0.1
Finished processing dependencies for DCNv2==0.1
```
But I get an error when testing:

```
True /usr/local/cuda
Traceback (most recent call last):
  File "/NAS/project01/rzimmerm_substitles/FairMot_compressing/src/lib/models/networks/DCNv2_new/testcuda.py", line 255, in <module>
    example_dconv()
  File "/NAS/project01/rzimmerm_substitles/FairMot_compressing/src/lib/models/networks/DCNv2_new/testcuda.py", line 175, in example_dconv
    output = dcn(input)
  File "/NAS/home01/tanzhenwei/.pyenv/versions/tzwpy/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/NAS/project01/rzimmerm_substitles/FairMot_compressing/src/lib/models/networks/DCNv2_new/dcn_v2.py", line 128, in forward
    self.deformable_groups)
  File "/NAS/project01/rzimmerm_substitles/FairMot_compressing/src/lib/models/networks/DCNv2_new/dcn_v2.py", line 31, in forward
    ctx.deformable_groups)
RuntimeError: Not compiled with GPU support (dcn_v2_forward at /NAS/project01/rzimmerm_substitles/FairMot_compressing/src/lib/models/networks/DCNv2_new/src/dcn_v2.h:35)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7f5f96ec4193 in /NAS/home01/tanzhenwei/.pyenv/versions/tzwpy/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: dcn_v2_forward(at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, at::Tensor const&, int, int, int, int, int, int, int, int, int) + 0x157 (0x7f5f91a755d7 in /NAS/project01/rzimmerm_substitles/FairMot_compressing/src/lib/models/networks/DCNv2_new/_ext.cpython-36m-x86_64-linux-gnu.so)
frame #2: <unknown function> + 0x17504 (0x7f5f91a82504 in /NAS/project01/rzimmerm_substitles/FairMot_compressing/src/lib/models/networks/DCNv2_new/_ext.cpython-36m-x86_64-linux-gnu.so)
...
```
Could you help me solve this or give some ideas?
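As a side note for later readers: a clean build log like the one above does not by itself prove GPU support, because the extension also builds and imports fine when compiled CPU-only. One rough check on the produced library (the .so name is taken from the log above; nm and grep are standard tools, and an empty result here is suggestive rather than conclusive):

```shell
# If this prints nothing, the .so most likely contains no CUDA code
# paths and was built without WITH_CUDA, matching the runtime error.
nm -D _ext.cpython-36m-x86_64-linux-gnu.so | grep -i cuda
```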