
The detected CUDA version (11.8) mismatches the version that was used to compile PyTorch (12.1). Please make sure to use the same CUDA versions #1453

Closed
sujianwei1 opened this issue Oct 24, 2023 · 19 comments

Comments

@sujianwei1

RuntimeError:
The detected CUDA version (11.8) mismatches the version that was used to compile
PyTorch (12.1). Please make sure to use the same CUDA versions.

@sujianwei1
Author

My torch version is 2.0.1.

@Tan-YiFan

If your CUDA version is 11.8, this command might help:

pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --index-url https://download.pytorch.org/whl/cu118

Ref: https://pytorch.org/get-started/previous-versions/
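
After installing, a quick way to verify which build actually ended up in the environment is:

python -c "import torch; print(torch.__version__, torch.version.cuda)"   # expected: 2.0.1+cu118 11.8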

@128Ghe980

Same problem, marking this to follow.

@128Ghe980

> If your cuda version is 11.8, this command might help:
>
> pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --index-url https://download.pytorch.org/whl/cu118
>
> Ref: https://pytorch.org/get-started/previous-versions/

I have a very similar problem:

The detected CUDA version (11.7) mismatches the version that was used to compile
PyTorch (12.1). Please make sure to use the same CUDA versions.

And I used this:

pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 --index-url https://download.pytorch.org/whl/cu117

but I still receive the same error. Can you help me?

@Tan-YiFan

@128Ghe980 Make sure that you run pip install in your conda env.
Try python -c "import torch; print(torch.__version__)". The output should be 2.0.1+cu117.

@wangkuiyi

wangkuiyi commented Oct 25, 2023

The error message in #1453 (comment) means

  1. You have NVCC version 11.8 installed on your host / conda env.
  2. You are using it to build vLLM's PyTorch extensions as listed in the setup.py file.
  3. You have PyTorch installed on your host / conda env.
  4. The version of NVCC used to build PyTorch is 12.1.
  5. You must use NVCC 12.1 to build vLLM's PyTorch extensions.

So, @sujianwei1 you may need to upgrade your CUDA / NVCC to version 12.1.

BTW, PyTorch 2.0.1 was built with NVCC 11.7, not 12.1. PyTorch 2.1.0 was built with NVCC 12.1. So I guess you may have installed PyTorch 2.1.0 instead of PyTorch 2.0.1, right?
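
To make the mechanism concrete, here is a minimal sketch of roughly what the build-time check does. This is an assumption modeled on the version check in torch.utils.cpp_extension, not vLLM's exact code:

import subprocess

import torch

def detect_nvcc_version(nvcc="nvcc"):
    # Parse the "release X.Y" field from `nvcc --version`.
    out = subprocess.check_output([nvcc, "--version"], text=True)
    return out.split("release ")[1].split(",")[0]  # e.g. "11.8"

nvcc_cuda = detect_nvcc_version()
torch_cuda = torch.version.cuda  # CUDA version PyTorch was built with, e.g. "12.1"

# Simplified comparison; the real check treats major and minor versions separately.
if nvcc_cuda != torch_cuda:
    raise RuntimeError(
        f"The detected CUDA version ({nvcc_cuda}) mismatches the version that was "
        f"used to compile PyTorch ({torch_cuda}). Please make sure to use the same "
        "CUDA versions."
    )

Because this check runs at build time, what matters is the nvcc found via your PATH / CUDA_HOME, not just the PyTorch wheel you pip installed.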

@sujianwei1
Author

Changing the torch version to 2.0 fixed it for me.

@Ikkyu321

Ikkyu321 commented Nov 1, 2023

Same issue with CUDA 11.8, marking to follow.

@juanmf

juanmf commented Dec 4, 2023

I'm getting this error in a Docker build, so I don't think the problem is in my system. Am I wrong?
I made this issue for tracking: mistralai/mistral-inference#76

@Tan-YiFan

Starting from vLLM v0.2.2, PyTorch v2.1 + CUDA 12.1 is supported. If the problem persists, it might help to upgrade to vLLM >= v0.2.2.
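
If you take that route, the upgrade itself is a normal pip command (the exact version pin below is only an example):

pip install "vllm>=0.2.2"

Keep in mind this will typically pull in the PyTorch 2.1 / CUDA 12.1 wheels, so the host nvcc should be 12.1 as well.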

@BramVanroy

Unfortunately I am still experiencing this issue. The script keeps telling me that

The detected CUDA version (11.8) mismatches the version that was used to compile
PyTorch (12.1). Please make sure to use the same CUDA versions.

Even though I specifically installed cu118. What's more, nvcc --version yields 11.8 and python -c "import torch; print(torch.version.cuda)" also prints 11.8. So I have no idea where this 12.1 is coming from, but it seems to have to do with the extensions that are being loaded. So I wonder whether there is a mismatch between those that come with the library and PyTorch.

@hsekki

hsekki commented Jan 22, 2024

Same problem as @BramVanroy, have you solved the issue? Thanks

@BramVanroy

> Same problem as @BramVanroy, have you solved the issue? Thanks

Sadly, no.

@OneStepAndTwoSteps

Same Problem.

@alik-git

Hi guys, not sure if this will help, but I was troubleshooting this myself for another repo (GroundingDINO) and managed to solve the issue, so I'm just sharing the solution.

I think this error occurs when pip creates a temporary Python environment to build some package, but that temporary environment may fetch a version of PyTorch that's compiled with a different version of CUDA than your system has. To fix this, you can use the command pip install --no-build-isolation package_name, so for example for the GroundingDINO package it would be:

pip install --no-build-isolation -e GroundingDINO
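
For vLLM itself, a sketch of the same workaround (assuming the PyTorch build matching your host CUDA is already installed, as in the pip commands earlier in this thread) would be to build from a source checkout without isolation:

pip install --no-build-isolation -e .   # run from inside the vLLM source directory

With build isolation disabled, the build reuses the PyTorch already present in your environment instead of fetching a freshly resolved, and possibly CUDA-mismatched, one.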

@ibeltagy

pip install --no-build-isolation -e . fixed the problem. Thank you, @alik-git

@Taited

Taited commented Mar 6, 2024

> pip install --no-build-isolation -e GroundingDINO

Thank you @alik-git ! You saved my day!

@ricardosanunes

> pip install --no-build-isolation -e GroundingDINO

So do I replace 'GroundingDINO' with 'torch'?
Sorry for the question, newbie here.

@alik-git

alik-git commented Apr 4, 2024

Yes, that's right: use whatever the name of the package you want to install is.
