# CUDA execution provider is not available #23833
Can you try our nightly version? Recently we added a feature that lets you fetch the CUDA/cuDNN libraries from pip when installing onnxruntime. See #23659.
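As a quick sanity check (a minimal sketch; the model path is a placeholder), note that `get_available_providers()` only reflects what the build was compiled with, while the session's `get_providers()` shows which providers actually activated:

```python
import onnxruntime as ort

# What the installed build supports at compile time; a provider listed
# here can still fail to load its shared libraries at runtime.
print(ort.get_available_providers())

# Requesting CUDA explicitly surfaces the real load error in the log;
# on failure onnxruntime falls back to the next provider in the list.
sess = ort.InferenceSession(
    "model.onnx",  # placeholder: any ONNX model on disk
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # the providers actually in use for this session
```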
Still not working:

```console
root@ubuntu:~/kokoro-onnx# sudo apt install nvidia-cudnn nvidia-cuda-toolkit
root@ubuntu:~/kokoro-onnx# uv pip install --pre --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ onnxruntime-gpu
Resolved 9 packages in 1m 13s
⠸ Preparing packages... (0/1)
onnxruntime-gpu ------------------------------ 57.56 MiB/267.80 MiB
root@ubuntu:~/kokoro-onnx# find / -name "libcublas*.so"
/usr/lib/x86_64-linux-gnu/libcublasLt.so
/usr/lib/x86_64-linux-gnu/stubs/libcublasLt.so
/usr/lib/x86_64-linux-gnu/stubs/libcublas.so
/usr/lib/x86_64-linux-gnu/libcublas.so
root@ubuntu:~/kokoro-onnx# LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH" LOG_LEVEL=DEBUG uv run examples/with_session.py
Available onnx runtime providers: ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Setting threads to CPU cores count: 30
2025-02-27 16:45:02.883638075 [E:onnxruntime:Default, provider_bridge_ort.cc:2022 TryGetProviderInfo_TensorRT] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1695 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_tensorrt.so with error: libcublas.so.12: cannot open shared object file: No such file or directory
*************** EP Error ***************
EP Error /onnxruntime_src/onnxruntime/python/onnxruntime_pybind_state.cc:505 void onnxruntime::python::RegisterTensorRTPluginsAsCustomOps(PySessionOptions&, const onnxruntime::ProviderOptions&) Please install TensorRT libraries as mentioned in the GPU requirements page, make sure they're in the PATH or LD_LIBRARY_PATH, and that your GPU is supported.
when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
****************************************
2025-02-27 16:45:02.994227831 [E:onnxruntime:Default, provider_bridge_ort.cc:2036 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1695 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.12: cannot open shared object file: No such file or directory
2025-02-27 16:45:02.994251395 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:994 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*. Please install all dependencies as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
DEBUG [__init__.py:169] Creating audio for 1 batches for 49 phonemes
DEBUG [__init__.py:76] Phonemes: həlˈoʊ. ðɪs ˈɔːdɪˌoʊ dʒˈɛnɚɹˌeɪɾᵻd baɪ kəkˈɔːɹoʊ!
DEBUG [__init__.py:100] Created audio in length of 3.70s for 49 phonemes in 0.81s (RTF: 0.22)
DEBUG [__init__.py:180] Created audio in 0.82s
Created audio.wav
```
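A side note on the transcript above: `find / -name "libcublas*.so"` only matches names that end in `.so`, so versioned runtime libraries such as `libcublas.so.12` (the exact soname the CUDA provider fails to dlopen) would never be listed even if present. A broader check, here as a small Python sketch with assumed search roots, shows whether the CUDA 12 / cuDNN 9 sonames exist at all:

```python
# Look for the versioned sonames the CUDA EP actually dlopens
# (libcublas.so.12, libcublasLt.so.12, libcudnn.so.9).
# The search roots are assumptions; adjust for your system.
from pathlib import Path

roots = ["/usr/lib/x86_64-linux-gnu", "/usr/local/cuda/lib64"]
for root in roots:
    base = Path(root)
    if base.is_dir():
        for pattern in ("libcublas*", "libcudnn*"):
            for hit in sorted(base.rglob(pattern)):
                print(hit)
```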
It was because:
I have no idea how to fix it; the onnxruntime docs are too general.
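One workaround the pip-based CUDA delivery (#23659) makes possible, sketched here under the assumption that the `nvidia-*` cu12 wheels unpack their shared libraries into `site-packages/nvidia/<pkg>/lib`: preload those libraries with `ctypes` before creating the session. Once a library with the right soname is loaded into the process, the later dlopen of `libonnxruntime_providers_cuda.so` can resolve its dependencies without `LD_LIBRARY_PATH`:

```python
# A possible workaround (a sketch, not official onnxruntime API): preload the
# CUDA 12 libraries shipped in the nvidia-* pip wheels, e.g. after
#   pip install nvidia-cublas-cu12 nvidia-cudnn-cu12
# The wheel layout site-packages/nvidia/<pkg>/lib is an assumption about the
# current cu12 wheels.
import ctypes
import glob
import os
import site

for sp in site.getsitepackages():
    for pkg in ("cublas", "cudnn"):
        for lib in sorted(glob.glob(os.path.join(sp, "nvidia", pkg, "lib", "*.so.*"))):
            try:
                ctypes.CDLL(lib, mode=ctypes.RTLD_GLOBAL)
            except OSError:
                pass  # some sub-libraries have their own unmet deps; skip them

import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",  # placeholder
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())
```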
> CUDA execution provider is not available
Also, I would add a warning log if `onnxruntime-gpu` is installed but no GPU is found! Maybe even GPU vendor detection, with instructions for the most common distros (e.g. Ubuntu) on how to fix the issue.
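For illustration, a sketch of what such a warning could look like on the caller's side (the `nvidia-smi` heuristic and the Ubuntu hint are assumptions, not existing onnxruntime behavior):

```python
import shutil
import warnings

import onnxruntime as ort


def warn_if_gpu_unusable() -> None:
    """Warn when a CUDA-capable onnxruntime build likely has no usable GPU."""
    if "CUDAExecutionProvider" not in ort.get_available_providers():
        return  # CPU-only build, nothing to check
    if shutil.which("nvidia-smi") is None:
        # Heuristic: no NVIDIA driver tooling on PATH usually means no usable GPU.
        warnings.warn(
            "onnxruntime-gpu is installed but no NVIDIA driver tooling was "
            "found on PATH. On Ubuntu, 'sudo ubuntu-drivers install' is a "
            "common first step."
        )


warn_if_gpu_unusable()
```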