
RuntimeError: CUDA error: API call is not supported in the installed CUDA driver #82

Open
LikeLidoA opened this issue Jun 8, 2022 · 1 comment


@LikeLidoA

Hello @chenwydj,

I'm stuck at step one.
My environment: Hardware: A100-40G; NVIDIA driver: 450.xx (I forget the exact version); CUDA 11.0; Python 3.6.9; cuDNN 8.0.4; TensorRT 7.2.5.1.
Before running "train_search.py" I checked all the requirements, and the TensorRT samples run fine.
When I run "train_search.py", the training phase works: I get a folder named like "search-pretrain-256x512_F12.L16_batch3-20220608-xxxxxx" with the expected contents, and the terminal shows that all 20 epochs have finished.
But when it comes to validation, something goes wrong. The terminal shows "use TensorRT for latency test", and then:
RuntimeError: CUDA error: API call is not supported in the installed CUDA driver

Is updating the driver the only way to solve this problem?
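(Not from the repo, but for context: this error usually means the installed NVIDIA driver is older than what the CUDA runtime requires; CUDA 11.0 generally needs driver 450.36.06 or newer on Linux. A minimal diagnostic sketch, assuming PyTorch with CUDA support is installed, that tries a trivial kernel launch:)

```python
# Minimal diagnostic sketch (not project code): check which CUDA runtime
# PyTorch was built against and try a trivial kernel launch. A driver that
# is too old typically fails here with the same "API call is not supported"
# error, independently of TensorRT.
import torch

print("PyTorch built against CUDA:", torch.version.cuda)  # e.g. "11.0"
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.ones(1, device="cuda")  # forces a real driver API call
    torch.cuda.synchronize()          # surfaces any asynchronous CUDA error
    print("Kernel launch OK:", x.item() == 1.0)
```

If this minimal launch already fails, the driver is likely too old for the CUDA 11.0 runtime and updating it is probably the real fix; if it succeeds, the problem is more likely specific to the TensorRT latency-test path.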

@ZhouZhengda

Hello, I also encountered this problem. When I iterated over my data in the DataLoader ("for i, samples_batch in enumerate(data_loader):"), I got the same error: "RuntimeError: CUDA error: API call is not supported in the installed CUDA driver".
I think it may be caused by the use of multiple processes in Python: after I removed the call "torch.multiprocessing.set_start_method('spawn')" and kept the data on the CPU inside the loader, moving it to CUDA only in the main process, the error no longer appeared. A sketch of this workaround is shown below.
Hope my answer can help you.
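(A minimal sketch of this workaround; the dataset and loader names are placeholders, not the project's actual code in train_search.py. The idea is to avoid 'spawn' and any CUDA work in worker processes, and to transfer each batch to the GPU in the main process.)

```python
# Hypothetical sketch of the workaround described above; dataset/loader
# names are illustrative placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset

# Deliberately NOT calling torch.multiprocessing.set_start_method('spawn').
# Workers then use the default start method and only ever handle CPU tensors.
dataset = TensorDataset(torch.randn(64, 3), torch.randint(0, 2, (64,)))  # CPU data
data_loader = DataLoader(dataset, batch_size=8, num_workers=2)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
for i, samples_batch in enumerate(data_loader):
    # Batches arrive on the CPU; move them to CUDA here, in the main process.
    inputs, targets = (t.to(device) for t in samples_batch)
```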
