CUDA not detected on my system #104
Comments
Hi @Leggyweggy. So you are running this as part of text-generation-webui and not standalone, correct? I assume you are starting text-generation-webui with its own start script. In summary, I'm guessing that torch wasn't compiled with CUDA on your system when text-generation-webui was installed (I've seen this a few times; it's nothing to do with AllTalk, so I'm not quite sure why it happens). Would you be able to provide me a diagnostics log file so I can look at your environment?
I suspect in the diagnostics you will spot the following: if there is no CU118 or CU121 suffix showing on the installed versions of Torch and Torchaudio, then the CUDA builds of PyTorch aren't installed. But we can look through the diagnostics and confirm. Thanks
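The check described above can be sketched as follows. This is a minimal illustration (not AllTalk's actual diagnostics code) of how a `+cu118`/`+cu121` suffix in the version string distinguishes a CUDA wheel of torch from a CPU-only one:

```python
import re

def is_cuda_build(version: str) -> bool:
    """Return True if a torch/torchaudio version string names a CUDA build.

    CUDA wheels carry a local version suffix such as '+cu118' or '+cu121';
    CPU-only wheels (e.g. plain '2.2.1') do not.
    """
    return re.search(r"\+cu\d+", version) is not None

# The version from the log below, '2.2.1', has no CUDA suffix:
print(is_cuda_build("2.2.1"))        # False -> CPU-only build
print(is_cuda_build("2.1.0+cu118"))  # True  -> CUDA 11.8 build
```

This is the same signal the diagnostics log exposes: a bare `2.2.1` with no `+cuXXX` tag means the CPU-only wheel is installed.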
@erew123 here's the log file it created. Edit: `INFO:root:torch Required: >= 2.1.0+cu118 Installed: 2.2.1`. Looks like your hypothesis was correct. What's the easiest solution?
Hi @Leggyweggy. Yeah, that does seem to be your problem! I have no idea whether you chose CUDA 11.8 or 12.1 when you installed text-generation-webui, so I'll give you both commands and, well, you'll have to make a choice. More than likely you'll want 12.1. So the full fix is: enter the Python environment and run the install command for CUDA 12.1.
Or, if you did want it with CUDA 11.8, run the CUDA 11.8 variant instead.
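The original commands were stripped from this thread, but they are typically of the following shape. Treat this as a sketch: the exact version pins are an assumption (2.2.1 matches the torch version in the log above), and the commands must be run inside text-generation-webui's Python environment.

```shell
# CUDA 12.1 builds of torch and torchaudio, from PyTorch's cu121 wheel index
pip install torch==2.2.1 torchaudio==2.2.1 --upgrade --index-url https://download.pytorch.org/whl/cu121

# Or, for CUDA 11.8, use the cu118 wheel index instead
pip install torch==2.2.1 torchaudio==2.2.1 --upgrade --index-url https://download.pytorch.org/whl/cu118
```

The `--index-url` pointing at `download.pytorch.org/whl/cuXXX` is what selects the CUDA build; installing from the default PyPI index can give you the CPU-only wheel.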
They are about a 2.5 GB download, so depending on your internet connection it can take a while. When it's completed, you can either run the diagnostics again to check that the CUDA build shows as installed, or confirm it from the Python prompt. If you are going to use DeepSpeed, you need to install DeepSpeed for the correct version of CUDA you are using. Thanks
@erew123
Great! Glad it's sorted. As I mentioned (and I'll note this for anyone else who reads this ticket), I have no idea why some systems aren't installing CUDA as part of the text-generation-webui installation. However, I don't force a CUDA install with AllTalk when it is installed as part of text-generation-webui: CUDA should already be installed at that point, and overwriting text-generation-webui's setup would potentially damage something (at least at some point), and I'm not wanting to deal with the fallout of that. I'll close off this ticket. Thanks
@erew123 Regarding the CUDA installation thing: maybe it's due to the way Text Generation WebUI handles its requirement updates? That's just my speculation, though. Before uninstalling, I double-checked that pip could see torch, torchaudio, and so on, which it could! But as we found out, it was the wrong version. What I can tell you is that I originally installed the Text Generation WebUI repository over 11 months ago, and the one-click installer might have installed the non-CUDA version of torch. I'm not exactly sure how the update script works, but if it uses pip, maybe pip still saw torch as up to date(?)
It's something in the original installation of it, I suspect. But if you have a non-CUDA version installed, it won't upgrade to a CUDA version until you clear your pip cache. Maybe an issue with an older text-generation-webui installer from somewhere back in the past.
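Clearing the cache as described can be done with pip's built-in `cache` command (available in pip 20.1 and later); a minimal example, again run inside the text-generation-webui environment (the cu121 index is an assumption, use cu118 if that is your CUDA version):

```shell
# Remove all cached wheels so the next install cannot reuse a CPU-only torch wheel
pip cache purge

# Force-reinstall torch/torchaudio from the CUDA wheel index
pip install --force-reinstall torch torchaudio --index-url https://download.pytorch.org/whl/cu121
```

`--force-reinstall` matters here: without it, pip may consider the already-installed CPU-only torch up to date and skip the download entirely, which matches the behaviour speculated about above.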
Thanks, this helped a lot 🙏
My system:
Windows 10, CUDA 11.8, Python 3.10.13, and a GTX 1050 Ti (slow, I know)
I installed alltalk_tts as instructed on the main GitHub repo with little to no issues. I got the main repo cloned into the extensions folder and installed the NVIDIA-specific requirements with pip. One error I noticed during the installation process was a dependency conflict between NumPy versions 1.22 and 1.24. This probably doesn't affect my issue, but I'm not a developer and figured it wouldn't hurt to mention it :)
The extension launches successfully but automatically loads the voice model in CPU mode. I'm probably just stupid, but neither the LowVRAM option nor DeepSpeed seems to be detecting my NVIDIA graphics card. I did get DeepSpeed to install properly, though! It's detected as it should be, which is good.
I checked the CUDA_HOME path through Text Generation WebUI and it seemed to have defaulted to Text Generation WebUI's main environment path, so from there I tried manually setting it to "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8". This did not change the issue. From there I tried setting it directly to the bin path inside there that contains nvcc.exe, with little to no luck.
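The CUDA_HOME experiment above can be sanity-checked with a small helper (a hypothetical sketch, not code from AllTalk or text-generation-webui): CUDA_HOME conventionally points at the toolkit root, i.e. the directory that *contains* `bin/nvcc(.exe)`, not at the `bin` directory itself.

```python
import os

def looks_like_cuda_home(path: str) -> bool:
    """Heuristic: a valid CUDA_HOME is the toolkit root containing bin/nvcc(.exe)."""
    bin_dir = os.path.join(path, "bin")
    return any(
        os.path.isfile(os.path.join(bin_dir, exe))
        for exe in ("nvcc", "nvcc.exe")
    )

# Example usage: validate the environment variable before launching
cuda_home = os.environ.get("CUDA_HOME", "")
if cuda_home and not looks_like_cuda_home(cuda_home):
    print(f"CUDA_HOME={cuda_home!r} does not contain bin/nvcc; check the path")
```

By this convention, pointing CUDA_HOME at the `bin` directory itself would fail the check, since the helper would then look for `bin/bin/nvcc.exe`.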
Help would be greatly appreciated!
Update:
I tried getting the extension to launch with LowVRAM and DeepSpeed enabled at the same time and got the error:
"raise RuntimeError("PyTorch version mismatch! DeepSpeed ops were compiled and installed "
RuntimeError: PyTorch version mismatch! DeepSpeed ops were compiled and installed with a different version than what is being used at runtime. Please re-install DeepSpeed or switch torch versions. Install torch version=2.1, Runtime torch version=2.2"
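The mismatch DeepSpeed is complaining about boils down to a major.minor comparison between the torch it was compiled against and the torch present at runtime. Roughly (a simplified sketch, not DeepSpeed's actual code):

```python
def torch_versions_match(compiled: str, runtime: str) -> bool:
    """DeepSpeed-style check: only the major.minor components must agree."""
    major_minor = lambda v: tuple(v.split("+")[0].split(".")[:2])
    return major_minor(compiled) == major_minor(runtime)

# The situation in this thread: ops compiled against torch 2.1, runtime torch 2.2
print(torch_versions_match("2.1", "2.2.1"))    # False -> mismatch error raised
print(torch_versions_match("2.2.1", "2.2.1"))  # True
```

This is why the fix is either reinstalling DeepSpeed (so its ops are rebuilt against torch 2.2) or switching torch back to a 2.1.x build; the patch component and the `+cuXXX` suffix do not trigger the error on their own.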