
read_fast.py script breaks without cuda #752

Open
vicbois390 opened this issue Mar 20, 2024 · 0 comments
I'm running CPU-only just to test things out, since I have an AMD GPU. I am able to run do_tts.py and read.py (eventually), but when I attempt to run read_fast.py, the following error appears:
```
Traceback (most recent call last):
  File "c:\Users\victor\tortoise-tts\tortoise\read_fast.py", line 33, in <module>
    tts = TextToSpeech(models_dir=args.model_dir, use_deepspeed=args.use_deepspeed, kv_cache=args.kv_cache, half=args.half)
  File "c:\Users\victor\tortoise-tts\tortoise\api_fast.py", line 222, in __init__
    hifi_model = torch.load(get_model_path('hifidecoder.pth'))
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 1026, in load
    return _load(opened_zipfile,
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 1438, in _load
    result = unpickler.load()
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 1408, in persistent_load
    typed_storage = load_tensor(dtype, nbytes, key, _maybe_decode_ascii(location))
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 1382, in load_tensor
    wrap_storage=restore_location(storage, location),
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 391, in default_restore_location
    result = fn(storage, location)
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 266, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "D:\miniconda3\envs\tortoise\lib\site-packages\torch\serialization.py", line 250, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
```

With no programming knowledge and only my basic ability to read, I changed line 222 in api_fast.py from

```python
hifi_model = torch.load(get_model_path('hifidecoder.pth'))
```

to

```python
hifi_model = torch.load(get_model_path('hifidecoder.pth'), map_location=torch.device('cpu'))
```

This resolved the error.

It seems like something fails to check for CUDA availability at or before this point in the script, but I don't know enough to say for sure. Since the change is working for me, I thought it best to bring it up.
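A device-aware load would avoid hard-coding CPU, so the same line works on both CUDA and CPU-only machines. A minimal sketch of that pattern (the in-memory buffer is just a stand-in for `get_model_path('hifidecoder.pth')` to keep the example self-contained; this is a suggested fix, not the project's actual code):

```python
import io
import torch

# Pick whichever device is actually available, then pass it as
# map_location so checkpoints saved on a CUDA machine are remapped
# onto CPU when CUDA is unavailable.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Stand-in for a checkpoint file on disk: save a tensor to an
# in-memory buffer, then load it back with the chosen map_location.
buf = io.BytesIO()
torch.save(torch.ones(2, 2), buf)
buf.seek(0)

state = torch.load(buf, map_location=device)
```

In api_fast.py this would amount to `torch.load(get_model_path('hifidecoder.pth'), map_location=device)` after computing `device` as above, rather than unconditionally mapping to CPU.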
