
[Bug] DDC-TTS_Universal-Fullband-MelGAN_MAI-karen_savage_ES.ipynb #427

Closed
rdbadra opened this issue Apr 13, 2021 · 0 comments
Labels
bug Something isn't working



rdbadra commented Apr 13, 2021

Hi, while trying to run the Colab tutorial for synthesizing Spanish speech, I got an error when executing the following line:

align, spec, stop_tokens, wav = tts(vocoder_model, sentence, TTS_CONFIG, use_cuda, ap, use_gl=False, figures=True)

This is the error:

in tts(model, text, CONFIG, use_cuda, ap, use_gl, figures)
12 t_1 = time.time()
13 waveform, alignment, mel_spec, mel_postnet_spec, stop_tokens, inputs = synthesis(model, text, CONFIG, use_cuda, ap, speaker_id, style_wav=None,
---> 14 truncated=False, enable_eos_bos_chars=CONFIG.enable_eos_bos_chars)
15 print(mel_postnet_spec.shape)
16 mel_postnet_spec = ap._denormalize(mel_postnet_spec.T).T

/content/TTS_repo/TTS/tts/utils/synthesis.py in synthesis(model, text, CONFIG, use_cuda, ap, speaker_id, style_wav, truncated, enable_eos_bos_chars, use_griffin_lim, do_trim_silence, speaker_embedding, backend)
239 if backend == 'torch':
240 decoder_output, postnet_output, alignments, stop_tokens = run_model_torch(
--> 241 model, inputs, CONFIG, truncated, speaker_id, style_mel, speaker_embeddings=speaker_embedding)
242 postnet_output, decoder_output, alignment, stop_tokens = parse_outputs_torch(
243 postnet_output, decoder_output, alignments, stop_tokens)

/content/TTS_repo/TTS/tts/utils/synthesis.py in run_model_torch(model, inputs, CONFIG, truncated, speaker_id, style_mel, speaker_embeddings)
57 else:
58 decoder_output, postnet_output, alignments, stop_tokens = model.inference(
---> 59 inputs, speaker_ids=speaker_id, speaker_embeddings=speaker_embeddings)
60 elif 'glow' in CONFIG.model.lower():
61 inputs_lengths = torch.tensor(inputs.shape[1:2]).to(inputs.device) # pylint: disable=not-callable

/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
25 def decorate_context(*args, **kwargs):
26 with self.__class__():
---> 27 return func(*args, **kwargs)
28 return cast(F, decorate_context)
29

TypeError: inference() got an unexpected keyword argument 'speaker_ids'

Thanks!
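
For context, the TypeError points to a signature mismatch: run_model_torch in TTS/tts/utils/synthesis.py forwards a speaker_ids keyword to model.inference(...), but the loaded model's inference() does not declare that parameter (which can happen when the model code used by the notebook is older than the cloned synthesis utilities). The snippet below is a minimal, hypothetical sketch that reproduces the same error; OldStyleModel is a stand-in class for illustration, not the actual TTS Tacotron2:

import torch

class OldStyleModel:
    # Older-style signature: no `speaker_ids` keyword, only `speaker_embeddings`.
    def inference(self, inputs, speaker_embeddings=None):
        return inputs

model = OldStyleModel()
inputs = torch.zeros(1, 10, dtype=torch.long)

try:
    # Mirrors the failing call in run_model_torch:
    #   model.inference(inputs, speaker_ids=speaker_id, speaker_embeddings=speaker_embeddings)
    model.inference(inputs, speaker_ids=None, speaker_embeddings=None)
except TypeError as err:
    print(err)  # -> inference() got an unexpected keyword argument 'speaker_ids'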

rdbadra added the bug (Something isn't working) label on Apr 13, 2021
rdbadra closed this as completed on Apr 13, 2021