Could not load library libcudnn_ops_infer.so.8 #516
Comments
You need CUDA 11.8.
You can find cuBLAS and cuDNN libs for Linux in the Releases section at https://github.com/Purfview/whisper-standalone-win. Not tested; please report if they work.
Check if your
I am also having a similar problem. I tried uninstalling CUDA 12.2 and cuDNN 9.x, then installing and pointing at CUDA 11.8.0. I also used the pip-based command in the instructions and set my $LD_LIBRARY_PATH in the terminal before running the script. Is it possible there is a disconnect between Jupyter Notebook and the actual virtual environment, or between the virtual environment and the base OS?
@justinthelaw try adding path to
@bestasoff @Benny739 I was able to fix this particular issue by uninstalling all of the NVIDIA dependencies for CUDA 12.x and reinstalling CUDA 11.8. Now I am running into a different problem that I'll discuss in a separate issue.
Having a similar issue. I'm trying to get Faster Whisper to run from a Docker build, using the following image: Unfortunately, I'm getting this libcudnn_ops_infer.so.8 issue as well. Does anyone know how I might add the necessary additional libraries? It seems I can't use the official NVIDIA one (it was too large for my smaller system to handle).
As in "at least 11.8" or "exactly 11.8"? I have CUDA Version: 12.0 installed (in WSL2/Ubuntu) but get this error.
If you are installing CUDA via pip in a virtual environment (and the same goes for on host, in a VM, or in a container):

```shell
# point to the venv's local CUDA 11.8 python libs
export LD_LIBRARY_PATH=${PWD}/.venv/lib64/python3.11/site-packages/nvidia/cublas/lib:${PWD}/.venv/lib64/python3.11/site-packages/nvidia/cudnn/lib
```

My previous comment about needing to downgrade my host CUDA toolkit and drivers was wrong. You just need a host system with drivers that support up to or past the CUDA version required by the library. If you continue to have trouble, please provide the pip dependencies installed in your dev/prod environment, where those deps are located in the environment, and also post the outputs of the following:

```shell
nvidia-smi
nvcc --version
```
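The hard-coded path above depends on your Python version and venv layout. A small Python helper can derive the candidate directories instead; this is a sketch assuming the `nvidia-cublas-cu*` and `nvidia-cudnn-cu*` pip wheels, which install their shared objects under `site-packages/nvidia/<pkg>/lib`:

```python
import os
import site

def cuda_lib_dirs() -> str:
    """Build a colon-separated list of candidate NVIDIA lib dirs
    inside this environment's site-packages."""
    dirs = []
    for sp in site.getsitepackages():
        for pkg in ("cublas", "cudnn"):
            dirs.append(os.path.join(sp, "nvidia", pkg, "lib"))
    return ":".join(dirs)

if __name__ == "__main__":
    # Append the output to LD_LIBRARY_PATH, e.g.:
    #   export LD_LIBRARY_PATH="$(python cuda_lib_dirs.py):${LD_LIBRARY_PATH}"
    print(cuda_lib_dirs())
```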
Hi everybody, and thank you for helping me solve this issue! Expanding on @justinthelaw's comment, I have used the following command instead:
With this you append the paths to the existing LD_LIBRARY_PATH rather than overwriting it. As a final comment,
@justinthelaw I am facing the same issue. Here are the answers to the questions you asked. PIP DEPENDENCIES INSTALLED LOCATION:
Sweet. I was able to get this working. I installed the NVIDIA software in the README, which caused issues. Had the same error. Steps to fix:
Hope this helps someone!
It happened in the
Problem: use Python to check the path of the lib.
Add the LD_LIBRARY_PATH variable in ~/.bashrc; its content is the path printed by Python.
After modifying it, remember to close the current terminal and open a new one so that the configuration takes effect.
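The steps above can be sketched as follows. This assumes the CUDA libraries were installed as pip wheels under `site-packages/nvidia/`; the exact paths on your machine are whatever Python prints:

```shell
# 1. Use Python to print the candidate lib dirs from the active environment
NVIDIA_LIBS=$(python3 - <<'EOF'
import os, site
print(":".join(os.path.join(sp, "nvidia", sub, "lib")
               for sp in site.getsitepackages()
               for sub in ("cublas", "cudnn")))
EOF
)

# 2. Persist the result in ~/.bashrc so every new terminal picks it up
echo "export LD_LIBRARY_PATH=${NVIDIA_LIBS}:\$LD_LIBRARY_PATH" >> ~/.bashrc

# 3. Open a new terminal (or run `source ~/.bashrc`) for the change to apply
```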
It works for me: `pip install torch --index-url https://download.pytorch.org/whl/cu121`
For everyone that has this issue: what fixed it for me was to include the path to torch too in LD_LIBRARY_PATH. The line below adds torch as well as cudnn and cublas to the path.
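The referenced line did not survive extraction. As a sketch of what such an export might look like, assuming pip-installed wheels in a `.venv` with Python 3.11 (adjust the Python version and venv path to your setup):

```shell
# Prepend torch's bundled libs plus the cublas/cudnn wheel libs to the loader path
SITE=${PWD}/.venv/lib/python3.11/site-packages
export LD_LIBRARY_PATH=${SITE}/torch/lib:${SITE}/nvidia/cublas/lib:${SITE}/nvidia/cudnn/lib:${LD_LIBRARY_PATH}
```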
I do not have an NVIDIA GPU, do not want to use CUDA, and cannot install CUDA. How can I use this program without installing any CUDA packages?
@otonoton I think that your best bet would be to use Whisper C++
I have been using it, but I was hoping to use faster-whisper for obvious reasons...
For posterity: if you need to add the torch library path, you don't need to add the other libraries, as a set of CUDA libraries is also bundled with it:
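To check the claim that torch ships its own CUDA libraries, you can list the shared objects bundled under `torch/lib`; this is a sketch, and the exact filenames depend on which torch build you installed:

```python
import importlib.util
import os

def bundled_cuda_libs() -> list:
    """List CUDA-related shared objects shipped inside the installed torch wheel."""
    spec = importlib.util.find_spec("torch")
    if spec is None or spec.origin is None:
        return []  # torch is not installed in this environment
    lib_dir = os.path.join(os.path.dirname(spec.origin), "lib")
    if not os.path.isdir(lib_dir):
        return []  # CPU-only or unusual layout: no bundled lib dir
    return sorted(f for f in os.listdir(lib_dir) if f.startswith(("libcud", "libcublas")))

print(bundled_cuda_libs())
```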
I've had this issue when using I think the issue with using the latest cuda image is because it ships with I hope this helps! Full
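The commenter's image name and Dockerfile did not survive extraction. Purely as an illustration, a minimal Dockerfile for GPU inference often pins a `cudnn8` CUDA base image rather than the latest tag, since the missing libcudnn_ops_infer.so.8 belongs to cuDNN 8; the image tag and Python setup below are assumptions, not the commenter's actual file:

```dockerfile
# Assumption: a cudnn8 runtime tag, because the error names a cuDNN 8 library
FROM nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

RUN pip3 install --no-cache-dir faster-whisper

# Smoke test: importing the package confirms the Python side is wired up
CMD ["python3", "-c", "from faster_whisper import WhisperModel; print('ok')"]
```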
For me, installing the cuDNN 8 libraries using
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
I'm using "nvidia/cuda:12.2.0-base-ubuntu20.04" image on google cloud with nvidia t4 gpus.
The normal whisper package model works fine on cuda.