
AttributeError: module 'torch' has no attribute 'compile' #85

Open
bluusun opened this issue Feb 9, 2024 · 8 comments
Comments

@bluusun

bluusun commented Feb 9, 2024

Python 3.10.10 (main, Mar 23 2023, 03:59:34) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.

from whisperspeech.pipeline import Pipeline

pipe = Pipeline(torch_compile=True)

Failed to load the T2S model:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/whisperspeech/pipeline.py", line 49, in __init__
if optimize: self.t2s.optimize(torch_compile=torch_compile)
File "/usr/local/lib/python3.10/site-packages/whisperspeech/t2s_up_wds_mlang_enclm.py", line 383, in optimize
self.generate_next = torch.compile(self.generate_next, mode="reduce-overhead", fullgraph=True)
AttributeError: module 'torch' has no attribute 'compile'

Failed to load the S2A model:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/whisperspeech/pipeline.py", line 57, in __init__
if optimize: self.s2a.optimize(torch_compile=torch_compile)
File "/usr/local/lib/python3.10/site-packages/whisperspeech/s2a_delar_mup_wds_mlang.py", line 452, in optimize
self.generate_next = torch.compile(self.generate_next, mode="reduce-overhead", fullgraph=True)
AttributeError: module 'torch' has no attribute 'compile'

pipe.generate_to_file("output.wav", "Hello from WhisperSpeech.")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/site-packages/whisperspeech/pipeline.py", line 90, in generate_to_file
self.vocoder.decode_to_file(fname, self.generate_atoks(text, speaker, lang=lang, cps=cps, step_callback=None))
File "/usr/local/lib/python3.10/site-packages/whisperspeech/pipeline.py", line 82, in generate_atoks
stoks = self.t2s.generate(text, cps=cps, lang=lang, step=step_callback)[0]
File "/usr/local/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/whisperspeech/t2s_up_wds_mlang_enclm.py", line 462, in generate
xenc, xenc_positions, cps_emb = self.run_encoder(ttoks, langs, cpss)
File "/usr/local/lib/python3.10/site-packages/whisperspeech/t2s_up_wds_mlang_enclm.py", line 297, in run_encoder
xenc = self.encoder(in_ttoks.to(torch.long), positions, lang_emb=lang_embs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.10/site-packages/whisperspeech/t2s_up_wds_mlang_enclm.py", line 204, in forward
for l in self.layers: x = l(x, positions,
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.10/site-packages/whisperspeech/modules.py", line 222, in forward
x = x + self.attn(lnx, x_positions, lnx, x_positions, causal=causal, mask=mask)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.10/site-packages/whisperspeech/modules.py", line 146, in forward
wv = F.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0, is_causal=causal)
AttributeError: module 'torch.nn.functional' has no attribute 'scaled_dot_product_attention'. Did you mean: '_scaled_dot_product_attention'?
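Both missing attributes (`torch.compile` and `F.scaled_dot_product_attention`) were introduced in PyTorch 2.0, so these tracebacks point to a pre-2.0 torch in the environment. A minimal sketch of the version check (pure string parsing; the helper name is illustrative, not part of any library):

```python
def supports_torch_compile(torch_version: str) -> bool:
    """torch.compile and F.scaled_dot_product_attention shipped in PyTorch 2.0."""
    # strip local build tags like "+cu118" before comparing
    base = torch_version.split("+")[0]
    major = int(base.split(".")[0])
    return major >= 2

print(supports_torch_compile("1.13.1+cu117"))  # False: would raise these AttributeErrors
print(supports_torch_compile("2.1.2+cu118"))   # True
```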

@bluusun
Author

bluusun commented Feb 9, 2024

Installed the latest version with pip3 install -U WhisperSpeech; CUDA 12.2.

@BBC-Esq
Contributor

BBC-Esq commented Feb 9, 2024

Are you using a custom script? Can you try using the example script I uploaded to the "examples" folder?

@bluusun
Author

bluusun commented Feb 9, 2024

I am using just the example code from here:

https://huggingface.co/spaces/collabora/WhisperSpeech

from whisperspeech.pipeline import Pipeline

pipe = Pipeline(torch_compile=True)
pipe.generate_to_file("output.wav", "Hello from WhisperSpeech.")

@BBC-Esq
Contributor

BBC-Esq commented Feb 9, 2024

Good luck!

@bluusun
Author

bluusun commented Feb 9, 2024

Good luck!

LOL - there is no fix?

@BBC-Esq
Contributor

BBC-Esq commented Feb 9, 2024

I already suggested you try the script that I created in the examples folder, and if you do, I can possibly offer some insight.

@bluusun
Author

bluusun commented Feb 9, 2024

Oh, your script worked perfectly :) May I ask where the option to use my own voice is in the code? I can see it on Hugging Face but not in the script/readme.

@BBC-Esq
Contributor

BBC-Esq commented Feb 9, 2024

You posted your message while I was writing this, but I'm going to finish it because it might still be useful for troubleshooting why the web-based script you linked didn't work. Here goes...

First, here's what ChatGPT says based on the error output you initially provided:

[screenshot of ChatGPT's response]

I am personally using CUDA 11.8 and PyTorch 2.1.2 on Windows, so I can't directly help troubleshoot Linux, but here's what I'd recommend:

  1. Try CUDA 11.8 instead of 12+.
  2. If you install CUDA 11.8, make sure you properly install PyTorch.

If your GPU is NVIDIA, you'll need the following commands (Python 3.10):

pip3 install https://download.pytorch.org/whl/cu118/torch-2.1.2%2Bcu118-cp310-cp310-linux_x86_64.whl#sha256=60396358193f238888540f4a38d78485f161e28ec17fa445f0373b5350ef21f0
pip3 install https://download.pytorch.org/whl/cu118/torchaudio-2.1.2%2Bcu118-cp310-cp310-linux_x86_64.whl#sha256=b39468862d34a3a89af4db333bc935a02525a509b2c8949f638f83eb6061da02
pip3 install https://download.pytorch.org/whl/cu118/torchvision-0.16.2%2Bcu118-cp310-cp310-linux_x86_64.whl#sha256=18470aef0bbde73f5a6a96135cd457f4d8be31f60be7ceae4ef5174f02f73add

If you are using an AMD GPU on Linux, note that PyTorch only supports GPU acceleration for AMD GPUs on Linux, not on Windows. It also requires a different PyTorch installation and ROCm as a prerequisite. I can't test this because I use neither Linux nor AMD GPUs, but here are the pip installs, assuming you get ROCm working:

pip3 install https://download.pytorch.org/whl/rocm5.6/torch-2.1.2%2Brocm5.6-cp310-cp310-linux_x86_64.whl#sha256=2e1d91e3d1e037e3c2588e33deb69c75a5146cd3b50f088bf73a6450c2c78ba8
pip3 install https://download.pytorch.org/whl/rocm5.6/torchaudio-2.1.2%2Brocm5.6-cp310-cp310-linux_x86_64.whl#sha256=4b69320bba7a3260a408dad29288616232d4f74216ba831c5aa3504b2f48b7bc
pip3 install https://download.pytorch.org/whl/rocm5.6/torchvision-0.16.2%2Brocm5.6-cp310-cp310-linux_x86_64.whl#sha256=3055d9f8b924fa193298dd33a75415a78da0e4fa75e87131c5f0d1a8a9ca2fcf

THIS ASSUMES ROCM VERSION 5.6. There are different commands for 5.5. You can browse all available wheels at these links:

https://download.pytorch.org/whl/cu118
https://download.pytorch.org/whl/rocm5.6

If you plan on using PyTorch 2.2.2, the proper commands can be obtained here:

https://pytorch.org/get-started/locally/

Note that the only ROCm version offered there is 5.7, though. PyTorch only gives you the nice fancy command selector for its latest stable version (2.2.2) and its "nightly" version. Follow my instructions above for the slightly older commands, or you can also go here:

https://pytorch.org/get-started/previous-versions/

However, I've noticed that sometimes these don't work as well as locating the specific wheels yourself. Also, make sure to install PyTorch after you install WhisperSpeech. I think a dependency of WhisperSpeech ("speechbrain", perhaps) also installs PyTorch, but a CPU-only version; reinstalling with the proper command should override that.
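When locating wheels yourself, the filename tags have to match your interpreter and platform (cp310 and linux_x86_64 in the commands above). A small sketch of reading those tags from a wheel filename (the helper name is mine, not part of pip):

```python
def wheel_tags(filename: str) -> dict:
    # wheel filenames follow name-version-python-abi-platform.whl (PEP 427);
    # rsplit keeps the "+cu118" local version tag intact
    stem = filename.removesuffix(".whl")
    name, version, py_tag, abi_tag, platform_tag = stem.rsplit("-", 4)
    return {"name": name, "version": version, "python": py_tag,
            "abi": abi_tag, "platform": platform_tag}

tags = wheel_tags("torch-2.1.2+cu118-cp310-cp310-linux_x86_64.whl")
print(tags["python"], tags["platform"])  # cp310 linux_x86_64
```

A cp310/linux_x86_64 wheel will only install on CPython 3.10 on x86-64 Linux, which is why the cu118 and rocm5.6 commands above are Python-version specific.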
