an error when trying to run the thing #18
Comments
If you open an interactive Python terminal and run torch.cuda.is_available(), what does it say?
How do I go about doing that? I'm not the best at this kind of checking stuff.
You just type "python", then "import torch", then run the command.
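For anyone following along, the whole check looks like this when typed into an interactive Python session (not saved as a script):

```python
import torch

# True means PyTorch can see a CUDA-capable GPU; False usually means
# a CPU-only build of PyTorch is installed or the NVIDIA driver is missing.
print(torch.cuda.is_available())

# These two help narrow down which case it is:
print(torch.__version__)   # a "+cpu" suffix indicates a CPU-only build
print(torch.version.cuda)  # None on CPU-only builds
```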
I think I am doing this the wrong way: every time I try, it says python: can't open file 'C:\Users\thema\Downloads\import': [Errno 2] No such file or directory
Never mind, figured it out: it says False. I don't know why, since I have used this machine to train AI models before.
Same error here. It also says that CUDA is not available, but other software can use it. The operating system in this case is Windows 10.
Okay, for me the problem was that the CUDA version I downloaded from the NVIDIA CUDA website was too new for PyTorch. I needed to install a nightly version of PyTorch and PyTorch Audio that supported the installed CUDA version.
Just an FYI: PyTorch typically comes packaged with its own CUDA runtime, so it doesn't matter what you have installed on your system. That said, you need to make sure you install a version of PyTorch with CUDA support, not the CPU-only build. Hopefully you both figured this out. Please re-open if I can help further.
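As an illustration of the fix described above, a reinstall might look like this. The `cu118` index tag is only an example; the install selector at pytorch.org generates the correct command for each CUDA version and platform:

```shell
# Show the installed torch build; a version like "2.1.0+cpu" means CPU-only.
pip show torch

# Remove the CPU-only build, then install a CUDA-enabled one.
# cu118 is an example tag; use the command generated for your setup on pytorch.org.
pip uninstall -y torch torchaudio
pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu118
```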
Hello everyone,
For some reason, when I run the do_tts Python script, I am getting the error below. I input my text and select the voice to use, but I still get it. I also have an NVIDIA GPU which I have used to train Tacotron models, so I really don't know what is happening here:
Traceback (most recent call last):
  File "C:\Users\thema\Downloads\tortoise-tts-main\do_tts.py", line 22, in <module>
    tts = TextToSpeech()
  File "C:\Users\thema\Downloads\tortoise-tts-main\api.py", line 201, in __init__
    self.vocoder.load_state_dict(torch.load('.models/vocoder.pth')['model_g'])
  File "C:\Users\thema\anaconda3\lib\site-packages\torch\serialization.py", line 712, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "C:\Users\thema\anaconda3\lib\site-packages\torch\serialization.py", line 1046, in _load
    result = unpickler.load()
  File "C:\Users\thema\anaconda3\lib\site-packages\torch\serialization.py", line 1016, in persistent_load
    load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "C:\Users\thema\anaconda3\lib\site-packages\torch\serialization.py", line 1001, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "C:\Users\thema\anaconda3\lib\site-packages\torch\serialization.py", line 176, in default_restore_location
    result = fn(storage, location)
  File "C:\Users\thema\anaconda3\lib\site-packages\torch\serialization.py", line 152, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "C:\Users\thema\anaconda3\lib\site-packages\torch\serialization.py", line 136, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
if anyone could help me with this, I would really appreciate it!
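For reference, the CPU fallback that the error message itself suggests looks like the sketch below. It is only a workaround for loading the checkpoint; running inference on the CPU will be much slower than fixing the CUDA install:

```python
import torch

def load_checkpoint_on_cpu(path):
    # map_location remaps CUDA-saved tensors onto the CPU so the file
    # can be deserialized even when torch.cuda.is_available() is False.
    return torch.load(path, map_location=torch.device('cpu'))

# e.g., for the checkpoint from the traceback above:
# state = load_checkpoint_on_cpu('.models/vocoder.pth')['model_g']
```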