User provided device_type of 'cuda', but CUDA is not available. Disabling #648
I'm getting the same issue, plus another one. I followed the instructions exactly. On Linux Mint, with a 3070 Ti.
I wouldn't normally +1 this, but I saw that a similar issue was recently closed, so I feel I should add my voice: I am also having this problem on a fairly fresh install of Pop!_OS (Ubuntu-based).
Check whether your installed torch has CUDA support. Normally I check it with this:
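The snippet itself did not survive the thread; a standard check (plain PyTorch calls, nothing project-specific) would be:

```shell
# Prints the CUDA version torch was built against (None for CPU-only wheels)
# and whether a CUDA device is actually usable right now
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```

If the first value is `None`, you have a CPU-only build of torch and reinstalling from a CUDA wheel index is the fix; if it prints a version but `is_available()` is False, the problem is on the driver side.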
I did that; it's not. I just noticed on the README:
should we be installing the nightly CPU version?
I have exactly the same error on Windows 10, GTX 980 Ti.
FWIW, I decided to uninstall Miniconda and reinstall full Anaconda.
The thing is, a couple of weeks ago I did everything according to the listed tortoise-tts manual and it all worked, on both Ubuntu and Windows systems (I have a spare SSD to experiment with and reinstall if needed) and on fresh installs; given that CUDA and the drivers were installed, everything worked. Now, doing the same, it throws the CUDA error, so I reckon it is something in a recent update. Thanks @heldenby, I will try installing full Anaconda instead of Miniconda, and will also try an older release of tts. Will edit once I have a result.
Unfortunately, uninstalling Miniconda and installing full Anaconda brings the same result.
I will try to wipe the SSD, reinstall Windows, CUDA and the driver, and go with Anaconda, to see if that helps...
I had exactly the same issue and struggled with it for some hours. The solution for me was to do a clean install of torch, as described here: pip uninstall torch
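The rest of the commands were lost above; the clean-reinstall recipe presumably looked something like the following (the `cu118` index tag is an assumption — pick the one matching your setup from https://pytorch.org/get-started/locally/):

```shell
# Remove any existing (possibly CPU-only) torch builds
pip uninstall -y torch torchvision torchaudio
# Reinstall from a CUDA wheel index; adjust cu118 to your CUDA version
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```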
This did not resolve the issue.
I tried taking the command from https://pytorch.org/get-started/locally/, but what actually helped is:
Meanwhile, doing it as written in the repo returns an error (I reproduced it on many fresh installs). If using CUDA 11.7, what helped me was to reset the repo to the commit mentioned above (5bbb0e0) and then execute the Python install script. You will get DS and other errors that are normal on Windows (I had this in the past; I think the dependencies were a mess back then, or something was running out of RAM and not getting installed, I don't know), and this comment helps to solve it. But if you do so, you will miss the nice new faster-inference feature.

I want to say a big thanks to all the devs and contributors for all the work being done, and to everyone who tried to help here. One more thing I noticed is that the RTX 3060 uses 256 samples when reading with both the standard and high_quality presets, but I guess I will open a separate issue on that (tried with a 3080 and all was OK).
I had the same problem using the Docker image, but using an older version of PyTorch helped me.
Recreating the conda environment, then installing pytorch and transformers with pip instead of conda solved this issue for me:
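The exact commands were not preserved here; a minimal sketch of that recipe might look like this (the environment name and Python version are assumptions):

```shell
# Recreate the conda environment from scratch
conda deactivate
conda env remove -n tortoise
conda create -y -n tortoise python=3.9
conda activate tortoise
# Install torch and transformers with pip rather than conda
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install transformers
```

The point of the pip-instead-of-conda step is that conda can silently resolve to a CPU-only torch build, whereas the pip wheel index lets you pick the CUDA variant explicitly.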
This worked for me!
Same issue here; this worked for me. You saved my life, thank you!
Confirmed that this worked for me. I actually had more issues when using Miniconda, which I eventually gave up on. Installing with pip got me up to the point of the error in this thread, then your suggestion worked.
Your suggestions worked! Here is a gist of everything I used to get it all working.
This also solved it for me (on Windows 11). Thank you!
The torch versions are mixed in the Dockerfile. According to ChatGPT (no guarantees — I fixed it manually and didn't test this file), the new Dockerfile should be:
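The corrected Dockerfile itself was not preserved in this thread. The gist of the fix is to pin torch, torchvision and torchaudio to one mutually compatible trio from a single CUDA wheel index inside the image's install step, e.g. (versions are illustrative, not from the original file):

```shell
# In the Dockerfile's RUN step: one matching trio from one index,
# instead of mixed versions pulled from different sources
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 \
    --index-url https://download.pytorch.org/whl/cu118
```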
A couple of these suggestions worked for me, but this one was the fastest. Thanks for posting. :)
@0xMatthew Can you share your CUDA version? It doesn't work for me.
Strangely, the following code outputs normally, but running the project still gives "CUDA is not available. Disabling". I tried all of the methods above.
From nvidia-smi:
This should add the
I have a system with an RTX 3060M.
Win10 (fresh install: only git, Miniconda, CUDA, the NVIDIA driver and tortoise-tts).
CUDA is 12.3 (I also tried 11.7 with the same result). When typing in the terminal:
but when doing a test TTS run, after downloading the files:
Generating autoregressive samples.. C:\ProgramData\miniconda3\envs\tortoise\lib\site-packages\torch\amp\autocast_mode.py:250: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
and then it uses the CPU to generate samples. I need it to run on the GPU. Please help.
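One diagnostic that may help narrow this down (standard commands, not from the original report): check whether the installed wheel is a CUDA build at all, since a `+cpu` wheel triggers exactly this warning no matter which driver or CUDA toolkit is installed.

```shell
# nvidia-smi reports the driver and the highest CUDA version it supports
nvidia-smi
# A CPU-only wheel reports a version like "2.x.y+cpu" and torch.version.cuda is None
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```

Note that the toolkit version from the CUDA installer (12.3 here) does not have to match the wheel tag; the wheel just needs a CUDA variant (`cu117`, `cu118`, ...) that the driver supports.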