
Could not load library libcudnn_ops_infer.so.8 #516

Open
Benny739 opened this issue Oct 16, 2023 · 22 comments

@Benny739

Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory

I'm using the "nvidia/cuda:12.2.0-base-ubuntu20.04" image on Google Cloud with NVIDIA T4 GPUs.

The normal whisper package model works fine on CUDA.

@phineas-pta

need cuda 11.8
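
One way to get the CUDA 11 builds of the required libraries is via pip (a sketch, mirroring the pip-based route used later in this thread):

pip install nvidia-cublas-cu11 nvidia-cudnn-cu11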

@Purfview
Contributor

Purfview commented Oct 16, 2023

Could not load library libcudnn_ops_infer.so.8

You can find cuBLAS and cuDNN libs for Linux under Releases at https://github.com/Purfview/whisper-standalone-win

They are not tested; please report whether they work.

@bestasoff

Check whether your LD_LIBRARY_PATH is set and points to your CUDA installation.
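
For example (the paths below are illustrative; adjust them to wherever libcudnn_ops_infer.so.8 actually lives on your system):

# show what the dynamic loader currently searches
echo $LD_LIBRARY_PATH
# check whether the loader knows about the library at all
ldconfig -p | grep libcudnn_ops_infer
# if not, point LD_LIBRARY_PATH at the directory containing libcudnn_ops_infer.so.8
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH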

@justinthelaw

I am also having a similar problem. I tried uninstalling CUDA 12.2 and cuDNN 9.x, then installing and pointing at CUDA 11.8.0. I also used the pip-based command from the instructions and set my $LD_LIBRARY_PATH in the terminal before running the script.

Is it possible there is a disconnect either between Jupyter Notebook and the actual virtual environment, or between the virtual environment and the base OS?

@bestasoff

@justinthelaw try adding the path to .../**/torch/lib to LD_LIBRARY_PATH.
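
For example (a sketch, assuming torch is installed in the active environment):

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(python3 -c 'import os, torch; print(os.path.dirname(torch.__file__) + "/lib")')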

@justinthelaw

@bestasoff @Benny739 I was able to fix this particular issue by uninstalling all of the NVIDIA dependencies for CUDA 12.x and just reinstalling CUDA 11.8. Now I am running into a different problem that I'll discuss in a separate issue.

@bakermanbrian

I'm having a similar issue. I'm trying to get Faster Whisper to run from a Docker build.

I'm trying to use the Docker image:
pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime

Unfortunately, I'm getting this libcudnn_ops_infer.so.8 issue as well. Does anyone know how I might add the necessary additional libraries? It seems I can't use the official NVIDIA image (it was too large for my smaller system to handle).
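
One possible route (untested here) is the pip-based approach that comes up later in this thread, run inside the container:

# install the CUDA 11 builds of cuBLAS and cuDNN as pip wheels
pip install nvidia-cublas-cu11 nvidia-cudnn-cu11
# point the loader at them
export LD_LIBRARY_PATH=$(python3 -c 'import os, nvidia.cublas.lib, nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))')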

@einarpersson

need cuda 11.8

As in "at least 11.8" or "exactly 11.8"? I have CUDA Version: 12.0 installed (in WSL2/Ubuntu) but get this error.

@justinthelaw

justinthelaw commented Jan 21, 2024

If you are installing CUDA via pip in a virtual environment (and the same goes for a host, a VM, or a container):

# point to VENV's local CUDA 11.8 python lib
export LD_LIBRARY_PATH=${PWD}/.venv/lib64/python3.11/site-packages/nvidia/cublas/lib:${PWD}/.venv/lib64/python3.11/site-packages/nvidia/cudnn/lib

My previous comment about needing to downgrade my host CUDA toolkit and drivers was wrong. You just need a host system whose drivers support the CUDA version required by the library, or a newer one.

If you continue to have trouble, please provide the pip dependencies installed in your dev/prod environment, where those deps are located in the environment, and also post the outputs of the following:

nvidia-smi
nvcc --version

@Luca-Pozzi

Hi everybody, and thank you for helping me in solving this issue!

Expanding on @justinthelaw's comment, I have used the following command instead:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/path/to/venv/lib64/python3.x/site-packages/nvidia/cublas/lib:$HOME/path/to/venv/lib64/python3.x/site-packages/nvidia/cudnn/lib

With this you append the paths to the $LD_LIBRARY_PATH rather than overwriting it. In the path, /path/to/venv must be substituted with the actual location (and name) of your virtual environment. The same applies to python3.x, where the x must be substituted with the Python version in use.

As a final comment, export applies only to the terminal in which it is issued. One may consider appending it to $HOME/.bashrc to make it persistent.
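
For example (placeholder paths; substitute your actual venv location and Python version):

echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/path/to/venv/lib64/python3.x/site-packages/nvidia/cublas/lib:$HOME/path/to/venv/lib64/python3.x/site-packages/nvidia/cudnn/lib' >> $HOME/.bashrc
source $HOME/.bashrc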

@fusesid

fusesid commented Feb 27, 2024

@justinthelaw I am facing the same issue. Here are the answers to the questions you asked.

nvidia-smi output: (screenshot attached)

nvcc --version output: (screenshot attached)

Pip dependencies installed: packages.txt (attached)

Location:
/home/anaconda3/envs/my_env/bin

@SeaDude

SeaDude commented Mar 1, 2024

Sweet. I was able to get this working.

I had installed the NVIDIA software from the README; that caused issues, and I got the same libcudnn_ops_infer.so.8 error as the original poster.

Steps to fix:

  1. Went back to the factory GPU driver: sudo apt-get purge nvidia-*, then sudo apt autoremove, then sudo apt install system76-driver-nvidia
    • Your factory driver will likely be different from mine
  2. Ran the alternative instructions in the README (I use Linux)
    • pip install nvidia-cublas-cu11 nvidia-cudnn-cu11
    export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
    
    • pip install faster-whisper
  3. Tried the quickstart code in the README with the jfk.flac file from the tests/data directory
    • The first time it ran, it downloaded the model
    • Second time, it transcribed the data
    • The key was setting the LD_LIBRARY_PATH env var
me@me:~/projects/speech$ source .venv/bin/activate
(.venv) me@me:~/projects/speech$ python3 test3.py 
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
Aborted (core dumped)
(.venv) me@me:~/projects/speech$ export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'`
(.venv) me@me:~/projects/speech$ python3 test3.py 
Detected language 'en' with probability 0.929688
[0.00s -> 3.00s]  And so my fellow Americans,
[3.00s -> 8.00s]  ask not what your country can do for you,
[8.00s -> 11.00s]  ask what you can do for your country.

Hope this helps someone!

@uumami

uumami commented Mar 16, 2024

It happened to me with docker-compose / Docker. To solve it, I had to execute the following command inside the container:

 export LD_LIBRARY_PATH=/usr/local/lib/python3.9/site-packages/torch/lib:$LD_LIBRARY_PATH

@HsinChiaChen

HsinChiaChen commented Mar 30, 2024

Problem:
Could not load library libcudnn_ops_infer.so.8. Error: libcublas.so.11: cannot open shared object file: No such file or directory
Aborted (core dumped)

Use Python to check the path of the libs:

import os
import nvidia.cublas.lib
import nvidia.cudnn.lib

print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))

Add the LD_LIBRARY_PATH variable to your .bashrc; its content is the paths printed by Python:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/usr/.local/lib/python3.10/site-packages/nvidia/cublas/lib:/home/usr/.local/lib/python3.10/site-packages/nvidia/cudnn/lib

After modifying it, remember to close the current terminal and open a new one so that the configuration takes effect.
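
Alternatively, you can reload the file in the current shell instead of opening a new terminal:

source ~/.bashrc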

@CrazyBunQnQ

It works for me:

pip install torch --index-url https://download.pytorch.org/whl/cu121

@disbullief

disbullief commented Apr 10, 2024

For everyone who has this issue, what fixed it for me was to also include the path to torch in LD_LIBRARY_PATH.
The Docker image I run is pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime

The line below adds torch as well as cudnn and cublas to the path.

export LD_LIBRARY_PATH=`python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; import torch; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__) + ":" + os.path.dirname(torch.__file__) +"/lib")'`

@otonoton

I do not have an NVIDIA GPU, do not want to use CUDA, and cannot install CUDA.

How can I use this program without installing any CUDA packages?

@disbullief

@otonoton I think that your best bet would be to use Whisper C++

@otonoton

I have been using it but I was hoping to use faster-whisper for obvious reasons...

@Prashant446

For everyone who has this issue, what fixed it for me was to also include the path to torch in LD_LIBRARY_PATH.

For posterity: if you add the torch library path, you don't need to add the other libraries, since a set of CUDA libraries is also bundled with it:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:`python3 -c 'import os; import torch; print(os.path.dirname(torch.__file__) +"/lib")'`

@fedirz
Contributor

fedirz commented May 18, 2024

I had this issue when using 12.4.1-cudnn-devel-ubuntu22.04 in my Dockerfile; switching to nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04 resolved it for me without resorting to LD_LIBRARY_PATH hackery or pip-installing the libraries.

I think the issue with using the latest CUDA image is that it ships with cuDNN 9, which according to the README.md isn't supported.
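
If in doubt, a quick way to check which cuDNN package a given image ships (a sketch, run from a host with Docker installed):

docker run --rm nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04 dpkg -l | grep -i cudnn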

I hope this helps!

Full Dockerfile for context:

FROM nvidia/cuda:12.2.2-cudnn8-runtime-ubuntu22.04
RUN apt-get update && \
    apt-get install -y curl software-properties-common && \
    add-apt-repository ppa:deadsnakes/ppa && \
    apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get -y install python3.11 python3.11-distutils && \
    curl -sS https://bootstrap.pypa.io/get-pip.py | python3.11
RUN pip install --no-cache-dir poetry==1.8.2
WORKDIR /root/speaches
COPY pyproject.toml poetry.lock ./
RUN poetry install
COPY ./speaches ./speaches
ENTRYPOINT ["poetry", "run"]
CMD ["uvicorn", "speaches.main:app"]

@storytracer

For me, installing the cuDNN 8 libraries with sudo apt install libcudnn8 on Ubuntu 22.04 fixed the issue!
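
If you go that route, you can verify that the loader now finds the library with something like:

ldconfig -p | grep libcudnn_ops_infer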
