
Warning! Reached max decoder steps #174

Closed
hadaev8 opened this issue Mar 29, 2019 · 23 comments

Comments

@hadaev8

hadaev8 commented Mar 29, 2019

I had no such problem before, but today I used the pretrained models, and sometimes the spectrogram is bugged.

Here is a notebook to reproduce it:
https://colab.research.google.com/drive/1jR12cEKdkg0hlDUHGhf2fPb0RwqPwEYj?#scrollTo=CyBu2F7eisFM

@rafaelvalle
Contributor

Take a look at the gate outputs and decrease the gate threshold accordingly.

@hadaev8
Author

hadaev8 commented Apr 1, 2019

@rafaelvalle
But I don't get why the spectrogram is different for the same text.
And why did I never notice it before? I last used this in January.
Maybe it is because of fc0d34c?

Also, the gate output looks like this:

@rafaelvalle
Contributor

The spectrogram is different with the same text because of the dropout on the Prenet layer: https://github.com/NVIDIA/tacotron2/blob/master/model.py#L100
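To illustrate why inference-time dropout makes the output non-deterministic, here is a minimal pure-Python sketch of a prenet-style layer (a toy stand-in, not the repo's Prenet, which calls `F.dropout` with `training=True`):

```python
import random

def prenet_forward(x, drop_prob=0.5, rng=None):
    """Toy prenet layer whose dropout stays active even at inference.
    Uses inverted dropout: each unit is zeroed with probability
    drop_prob, and survivors are rescaled by 1 / (1 - drop_prob)."""
    rng = rng or random.Random()
    return [0.0 if rng.random() < drop_prob else v / (1.0 - drop_prob)
            for v in x]
```

Two forward passes over the same input therefore generally disagree, which is exactly why the synthesized spectrogram differs between runs on the same text.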

The gate threshold is what controls when the model should stop decoding. You can decrease the value to prevent such issues.
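As a framework-free illustration of that stopping mechanism (a toy sketch, not the repo's actual decoder loop), the function below stops as soon as the sigmoid of the gate logit crosses `gate_threshold`, and otherwise falls back to the max-decoder-steps warning; lowering the threshold makes the model stop earlier:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode(gate_logits, gate_threshold=0.5, max_decoder_steps=1000):
    """Toy decoder loop: emit one output per step, stop when the gate
    fires, warn when the hard cap on steps is reached."""
    outputs = []
    for step, logit in enumerate(gate_logits):
        outputs.append(step)
        if sigmoid(logit) > gate_threshold:
            break  # model predicted end of utterance
        if step + 1 >= max_decoder_steps:
            print("Warning! Reached max decoder steps")
            break
    return outputs
```

With a logit of -0.3 (sigmoid ≈ 0.43), a threshold of 0.5 never fires and the loop runs to the cap, while a threshold of 0.4 stops immediately; that is the effect of decreasing `gate_threshold` in the repo's hparams.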

@hadaev8
Author

hadaev8 commented Apr 2, 2019

So, I get this error with the pretrained Tacotron and the default code from the repo. Don't you want to investigate?

@rafaelvalle
Contributor

What sentence did you use?

@hadaev8
Author

hadaev8 commented Apr 2, 2019

It seems to happen sometimes if I have whitespace at the end, like:
'We have started the invasion. '

@RoelVdP

RoelVdP commented Apr 19, 2019

I too think this issue may have been closed out too early. I sometimes see this error. Seems to happen with shorter sentences.

I think I found a valid input for which it always happens; try "%1" as the only input. i.e. try and let it speak "%1". Make sure %1 is not interpreted as some variable or something. The sound should be (incorrectly) "one" and then 11-40 seconds of random "singing" noise (and the decoder steps error will show).

This happens with the pre-trained model. Reducing the gate threshold does not help. Increasing the decoder steps sometimes works (but not for this example). It is also starting to look somewhat sporadic/case-based.

@RoelVdP

RoelVdP commented Apr 19, 2019

I tested/confirmed that removing the change in fc0d34c makes no difference; the error still happens.

@shoegazerstella

I am having the same issue.
My input text is hello, this is the -weird- generated wav file. The good thing is that the TTS does not crash, but it seems the last letter o is repeated for 9 seconds.
Would it be possible to avoid this? Maybe by attaching some silence at the end of the phrase if it is too short?

@rafaelvalle
Contributor

rafaelvalle commented Apr 22, 2019

The model is very sensitive to punctuation.
Make sure to trim trailing spaces and always use some punctuation.
In the hello case, try adding the period: "hello."
You can also try modifying the gate threshold.
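Those preprocessing rules can be wrapped in a small helper before text reaches the model (a hypothetical sketch; `normalize_text` is not part of the repo):

```python
def normalize_text(text):
    """Trim stray whitespace and ensure the sentence ends with
    punctuation, since the model is sensitive to both."""
    text = text.strip()
    if text and text[-1] not in ".!?,;:":
        text += "."
    return text
```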

@haqkiemdaim

I am having the same issue.
My input text is hello, this is the -weird- generated wav file. The good thing is that the tts does not crash, but it seems the last letter o is repeated for 9 seconds.
Would it be possible to avoid this? maybe by attaching some silence at the end of the phrase if this is too short?

Hi there @shoegazerstella, just want to ask: have you solved the decoder issue? What would be the best solution?

@rafaelvalle
Contributor

rafaelvalle commented Nov 2, 2019

@haqkiemdaim
Have you tried adding a '.' to the text input: "Hello." instead of "Hello" ?

@haqkiemdaim

@haqkiemdaim
Have you tried adding a '.' to the text input: "Hello." instead of "Hello" ?

Haven't yet. But my dataset does not contain any punctuation.

@rafaelvalle
Contributor

rafaelvalle commented Nov 3, 2019

You can add to every sentence of your dataset a token that signifies EOS (End of Sentence).
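For example, an LJSpeech-style filelist ("wav_path|transcript" per line) could be rewritten as below. This is a sketch, not repo code, and the EOS character here is an assumption: pick a symbol that your symbols.py actually contains.

```python
EOS = "~"  # assumed EOS marker; must be a symbol present in symbols.py

def append_eos(filelist_line, sep="|"):
    """Append the EOS token to the transcript column of one
    'wav_path|transcript' filelist line."""
    path, text = filelist_line.split(sep, 1)
    text = text.rstrip()
    if not text.endswith(EOS):
        text += EOS
    return path + sep + text
```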

@haqkiemdaim

You can add to every sentence of your dataset a token that signifies EOS (End of Sentence).

Ohh, I see. Is it crucial? Because in plain Tacotron I used the same data/corpus without punctuation and it went well, but not in Tacotron 2. Any reason for that?

@rafaelvalle
Contributor

rafaelvalle commented Nov 3, 2019 via email

@rafaelvalle
Contributor

rafaelvalle commented Nov 3, 2019 via email

@haqkiemdaim

@rafaelvalle This is what I have done so far:

  • remove trailing spaces
  • change max decoder steps from 500 to 2000

But it still reaches max decoder steps at the 4th epoch.

Now I'm running my dataset again, but with a "." at the end of every line.

I will update the outcomes here.

Regarding the gate threshold, it is currently 0.5.

But I'm a bit confused: why do we need the gate threshold when we have already set the max decoder steps to a certain value?

@SomeUserName1

@haqkiemdaim Did adding punctuation fix the error?

@shoegazerstella

@haqkiemdaim Did adding punctuation fix the error?

It worked for my case. Thanks!

@thangdc94

thangdc94 commented Dec 11, 2019

@rafaelvalle
If EOS works, why don't you make it a default option in symbols.py?
I see other contributors give answers related to EOS, such as accelerating the convergence of attention here.

ksaidin mentioned this issue May 8, 2020
@ErfolgreichCharismatisch

ErfolgreichCharismatisch commented Aug 19, 2021

How do you try/except the message "Warning! Reached max decoder steps"? Which file produces it?
EDIT: I found it at https://github.com/NVIDIA/tacotron2/blob/185cd24e046cc1304b4f8e564734d2498c6e2e6f/model.py
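Since that file only prints the warning rather than raising an exception, there is nothing to try/except directly; one workaround (a sketch, with `MAX_DECODER_STEPS` standing in for the repo's `hparams.max_decoder_steps`) is to check the produced mel length after inference:

```python
MAX_DECODER_STEPS = 1000  # stand-in for hparams.max_decoder_steps

def hit_decoder_cap(n_mel_frames, max_steps=MAX_DECODER_STEPS):
    """Return True if inference produced as many frames as the cap,
    i.e. the gate never fired and the warning was printed."""
    return n_mel_frames >= max_steps
```

With a PyTorch mel tensor of shape (n_mels, T) you would pass `mel_outputs.size(-1)` as `n_mel_frames`; alternatively, patch model.py to raise an exception instead of printing.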

@coding-dallas

I had no such problem before But today used pretrained models And sometimes spectrogram is bugged

here is notebook for reproduce https://colab.research.google.com/drive/1jR12cEKdkg0hlDUHGhf2fPb0RwqPwEYj?#scrollTo=CyBu2F7eisFM

The notebook link is not opening.
