Can't use CUDAExecutionProvider #2
Hi @YoadTew! Thank you for using my library. Have you looked at the examples folder? In order to use ONNX together with the GPU, you must first run:

!pip install onnxruntime-gpu

Then check the functionality of the module:

import onnxruntime
print(onnxruntime.get_device())  # should return "GPU"

After these steps, please restart your runtime. I think that will help.
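Beyond `get_device()`, it also helps to check which execution providers this onnxruntime build actually registered; a working GPU install must list `CUDAExecutionProvider` there. A minimal sketch (not from the thread; the import is guarded so it also runs where onnxruntime is absent):

```python
import importlib.util

# get_device() returning "GPU" alone is not enough if the CUDA provider
# failed to register; check the provider list explicitly.
if importlib.util.find_spec("onnxruntime") is not None:
    import onnxruntime
    providers = onnxruntime.get_available_providers()
else:
    providers = []  # onnxruntime not installed in this environment
print("CUDAExecutionProvider" in providers)
```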
Hey @Lednik7, thank you for responding. I have looked at the examples folder and ran all those steps. Running

print(onnxruntime.get_device())

does return "GPU" for me, but I still have the same problem I described earlier. I also restarted my machine to make sure.
@YoadTew Can you tell me what configuration you are working on? In what environment?
@Lednik7 I'm working with Ubuntu 20.04 in a new conda environment with Python 3.8. The only packages I installed are the ones required by this repo. Here is the output of !nvidia-smi:

Here is the output of pip freeze:

Do you need anything else?
@YoadTew Try installing and running the example again with os.environ["CUDA_VISIBLE_DEVICES"] = "0". I want to find out whether or not this is a cluster problem.
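For context (a sketch, not part of the original comment): `CUDA_VISIBLE_DEVICES` is only read when the CUDA runtime first initializes, so it must be set before importing torch or creating an onnxruntime session.

```python
import os

# Restrict CUDA to GPU 0. This must run before any CUDA initialization
# (i.e., before importing torch or creating an onnxruntime InferenceSession),
# otherwise the setting is silently ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
print(os.environ["CUDA_VISIBLE_DEVICES"])
```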
@Lednik7 It doesn't seem to help. The same problem also happens when I use my own PC with Ubuntu 20.04 and a single RTX 3070:
@YoadTew Try to run the conversion-and-apply example from https://catboost.ai/en/docs/concepts/apply-onnx-ml together with CUDAExecutionProvider.
@YoadTew Did you manage to get it running, or did you have problems installing catboost? I asked you to run it to check whether onnxruntime-gpu works.
I am having the same problem. Error message:
CUDA versions:
onnx versions:
Verifying onnxruntime can get GPU:
Here is what I get when going through the first catboost example:
Looking closer at the onnxruntime compatibility matrix, I noticed that onnx 1.10 actually matches onnxruntime 1.9 (which begs the question: what does onnxruntime 1.10 match?).
This appears to be a simple version-mismatch problem. But it seems unexpected that such problems should arise when I installed my packages with
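A quick way to check the installed pair against the compatibility matrix (a hedged sketch; the imports are guarded so it also runs where the packages are missing):

```python
import importlib

def installed_versions(names=("onnx", "onnxruntime")):
    """Return {package: version string, "unknown", or None if not installed}."""
    versions = {}
    for name in names:
        try:
            module = importlib.import_module(name)
            versions[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            versions[name] = None  # not installed in this environment
    return versions

print(installed_versions())
```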
Thank you @GuillaumeTong for the tests. So everything works for you now?
@Lednik7 Yes, correct.
For anyone else having a similar issue and using Torch: ensuring Torch is imported before onnxruntime solved my issue, with:

import torch
import onnxruntime
Hey, I'm trying to use the code on GPU and I encountered 2 problems:

1. When running

pip install git+https://github.com/Lednik7/CLIP-ONNX.git

I got the following error (tried on multiple machines):

ERROR: Could not find a version that satisfies the requirement torch==1.10.0+cu111 (from clip-onnx)

I fixed it by installing that version of torch myself with

pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html

and then running the rest of the installation.

2. I ran the model with CPUExecutionProvider and it worked fine, but when I try to run it on GPU with CUDAExecutionProvider I get the following error message (again on different machines):

2022-01-31 20:57:03.234399301 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:535 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
2022-01-31 20:57:03.872349008 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:535 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.

I can't figure out what the problem is. Any help?
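One more thing worth checking (a sketch under the assumption that the onnxruntime-gpu build and CUDA libraries are otherwise correct): recent onnxruntime versions expect the `providers` argument to `InferenceSession` to be passed explicitly, and listing `CPUExecutionProvider` after `CUDAExecutionProvider` gives a graceful fallback instead of a hard failure. Only the pure selection helper is runnable here; the session call itself needs a real model file.

```python
def choose_providers(available):
    """Prefer CUDA when the build registered it; always keep CPU as a fallback."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

# Usage (assumes onnxruntime-gpu is installed and model.onnx exists):
# import onnxruntime
# session = onnxruntime.InferenceSession(
#     "model.onnx",
#     providers=choose_providers(onnxruntime.get_available_providers()),
# )
```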