Running CPU only just to test things out, as I have an AMD GPU. I am able to run do_tts.py and read.py (eventually), but when attempting to run read_fast.py, the following error appears:

```
Traceback (most recent call last):
  File "c:\Users\victor\tortoise-tts\tortoise\read_fast.py", line 33, in <module>
    tts = TextToSpeech(models_dir=args.model_dir, use_deepspeed=args.use_deepspeed, kv_cache=args.kv_cache, half=args.half)
  File "c:\Users\victor\tortoise-tts\tortoise\api_fast.py", line 222, in __init__
    hifi_model = torch.load(get_model_path('hifidecoder.pth'))
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 1026, in load
    return _load(opened_zipfile,
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 1438, in _load
    result = unpickler.load()
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 1408, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 1382, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 391, in default_restore_location
    result = fn(storage, location)
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 266, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 250, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```
With no programming knowledge and only a basic ability to read, I changed the offending line in api_fast.py (line 222 in the traceback above) from

```python
hifi_model = torch.load(get_model_path('hifidecoder.pth'))
```

to

```python
hifi_model = torch.load(get_model_path('hifidecoder.pth'), map_location=torch.device('cpu'))
```

This resolved the error.
It seems like something fails to check for CUDA availability at or before this point in the script, but I don't know enough to say for sure. Since it is working for me now, I thought it best to bring it up.
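A more general version of the same fix is to pick the device first and pass it as `map_location`, so the load works on both CPU-only and CUDA machines. This is a minimal self-contained sketch of the pattern (using an in-memory buffer in place of the real `hifidecoder.pth` checkpoint); it is not Tortoise's own code, just the idiom the PyTorch error message recommends:

```python
import io
import torch

# Stand-in for a checkpoint file: save a small CPU tensor to a buffer.
buf = io.BytesIO()
torch.save(torch.ones(2), buf)
buf.seek(0)

# Choose CUDA only when it is actually available; otherwise fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# map_location remaps any CUDA-saved storages onto the chosen device,
# which is what avoids the RuntimeError on a CPU-only machine.
t = torch.load(buf, map_location=device)
print(t.device.type)
```

Applied to api_fast.py, that would mean computing `device` this way and calling `torch.load(get_model_path('hifidecoder.pth'), map_location=device)` instead of hard-coding `'cpu'`.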