
Latest model configuration is not compatible with master #423

Closed
Mic92 opened this issue Jun 13, 2020 · 8 comments

Comments
@Mic92
Contributor

Mic92 commented Jun 13, 2020

I tried https://drive.google.com/open?id=10ymOlWHutqTtfDYhIbHULn2IKDKP0O9m
with master and got the following error:

$ python -m TTS.server --pwgan_config ./pwgan/config.json --pwgan_file ./pwgan/best_model.pth.tar --tts_config ./tts/config.json --tts_checkpoint ./tts/best_model.pth.tar

    Missing key(s) in state_dict: "encoder.convolutions.0.convolution1d.weight", "encoder.convolutions.0.convolution1d.bias",
    "encoder.convolutions.0.batch_normalization.weight", "encoder.convolutions.0.batch_normalization.bias",
    "encoder.convolutions.0.batch_normalization.running_mean", "encoder.convolutions.0.batch_normalization.running_var",
    "encoder.convolutions.1.convolution1d.weight", "encoder.convolutions.1.convolution1d.bias",
    "encoder.convolutions.1.batch_normalization.weight", "encoder.convolutions.1.batch_normalization.bias",
    "encoder.convolutions.1.batch_normalization.running_mean", "encoder.convolutions.1.batch_normalization.running_var",
    "encoder.convolutions.2.convolution1d.weight", "encoder.convolutions.2.convolution1d.bias",
    "encoder.convolutions.2.batch_normalization.weight", "encoder.convolutions.2.batch_normalization.bias",
    "encoder.convolutions.2.batch_normalization.running_mean", "encoder.convolutions.2.batch_normalization.running_var",
    "decoder.prenet.linear_layers.0.linear_layer.weight", "decoder.prenet.linear_layers.0.batch_normalization.weight",
    "decoder.prenet.linear_layers.0.batch_normalization.bias", "decoder.prenet.linear_layers.0.batch_normalization.running_mean",
    "decoder.prenet.linear_layers.0.batch_normalization.running_var",
    "decoder.prenet.linear_layers.1.linear_layer.weight", "decoder.prenet.linear_layers.1.batch_normalization.weight",
    "decoder.prenet.linear_layers.1.batch_normalization.bias", "decoder.prenet.linear_layers.1.batch_normalization.running_mean",
    "decoder.prenet.linear_layers.1.batch_normalization.running_var",
    "decoder.attention.location_layer.location_conv1d.weight",
    "postnet.convolutions.0.convolution1d.weight", "postnet.convolutions.0.convolution1d.bias",
    "postnet.convolutions.0.batch_normalization.weight", "postnet.convolutions.0.batch_normalization.bias",
    "postnet.convolutions.0.batch_normalization.running_mean", "postnet.convolutions.0.batch_normalization.running_var",
    "postnet.convolutions.1.convolution1d.weight", "postnet.convolutions.1.convolution1d.bias",
    "postnet.convolutions.1.batch_normalization.weight", "postnet.convolutions.1.batch_normalization.bias",
    "postnet.convolutions.1.batch_normalization.running_mean", "postnet.convolutions.1.batch_normalization.running_var",
    "postnet.convolutions.2.convolution1d.weight", "postnet.convolutions.2.convolution1d.bias",
    "postnet.convolutions.2.batch_normalization.weight", "postnet.convolutions.2.batch_normalization.bias",
    "postnet.convolutions.2.batch_normalization.running_mean", "postnet.convolutions.2.batch_normalization.running_var",
    "postnet.convolutions.3.convolution1d.weight", "postnet.convolutions.3.convolution1d.bias",
    "postnet.convolutions.3.batch_normalization.weight", "postnet.convolutions.3.batch_normalization.bias",
    "postnet.convolutions.3.batch_normalization.running_mean", "postnet.convolutions.3.batch_normalization.running_var",
    "postnet.convolutions.4.convolution1d.weight", "postnet.convolutions.4.convolution1d.bias",
    "postnet.convolutions.4.batch_normalization.weight", "postnet.convolutions.4.batch_normalization.bias",
    "postnet.convolutions.4.batch_normalization.running_mean", "postnet.convolutions.4.batch_normalization.running_var".

    Unexpected key(s) in state_dict: "encoder.convolutions.0.net.0.weight", "encoder.convolutions.0.net.0.bias",
    "encoder.convolutions.0.net.1.weight", "encoder.convolutions.0.net.1.bias", "encoder.convolutions.0.net.1.running_mean",
    "encoder.convolutions.0.net.1.running_var", "encoder.convolutions.0.net.1.num_batches_tracked",
    "encoder.convolutions.1.net.0.weight", "encoder.convolutions.1.net.0.bias",
    "encoder.convolutions.1.net.1.weight", "encoder.convolutions.1.net.1.bias", "encoder.convolutions.1.net.1.running_mean",
    "encoder.convolutions.1.net.1.running_var", "encoder.convolutions.1.net.1.num_batches_tracked",
    "encoder.convolutions.2.net.0.weight", "encoder.convolutions.2.net.0.bias",
    "encoder.convolutions.2.net.1.weight", "encoder.convolutions.2.net.1.bias", "encoder.convolutions.2.net.1.running_mean",
    "encoder.convolutions.2.net.1.running_var", "encoder.convolutions.2.net.1.num_batches_tracked",
    "decoder.prenet.layers.0.linear_layer.weight", "decoder.prenet.layers.0.bn.weight", "decoder.prenet.layers.0.bn.bias",
    "decoder.prenet.layers.0.bn.running_mean", "decoder.prenet.layers.0.bn.running_var",
    "decoder.prenet.layers.0.bn.num_batches_tracked",
    "decoder.prenet.layers.1.linear_layer.weight", "decoder.prenet.layers.1.bn.weight", "decoder.prenet.layers.1.bn.bias",
    "decoder.prenet.layers.1.bn.running_mean", "decoder.prenet.layers.1.bn.running_var",
    "decoder.prenet.layers.1.bn.num_batches_tracked",
    "decoder.attention.location_layer.location_conv.weight",
    "postnet.convolutions.0.net.0.weight", "postnet.convolutions.0.net.0.bias",
    "postnet.convolutions.0.net.1.weight", "postnet.convolutions.0.net.1.bias", "postnet.convolutions.0.net.1.running_mean",
    "postnet.convolutions.0.net.1.running_var", "postnet.convolutions.0.net.1.num_batches_tracked",
    "postnet.convolutions.1.net.0.weight", "postnet.convolutions.1.net.0.bias",
    "postnet.convolutions.1.net.1.weight", "postnet.convolutions.1.net.1.bias", "postnet.convolutions.1.net.1.running_mean",
    "postnet.convolutions.1.net.1.running_var", "postnet.convolutions.1.net.1.num_batches_tracked",
    "postnet.convolutions.2.net.0.weight", "postnet.convolutions.2.net.0.bias",
    "postnet.convolutions.2.net.1.weight", "postnet.convolutions.2.net.1.bias", "postnet.convolutions.2.net.1.running_mean",
    "postnet.convolutions.2.net.1.running_var", "postnet.convolutions.2.net.1.num_batches_tracked",
    "postnet.convolutions.3.net.0.weight", "postnet.convolutions.3.net.0.bias",
    "postnet.convolutions.3.net.1.weight", "postnet.convolutions.3.net.1.bias", "postnet.convolutions.3.net.1.running_mean",
    "postnet.convolutions.3.net.1.running_var", "postnet.convolutions.3.net.1.num_batches_tracked",
    "postnet.convolutions.4.net.0.weight", "postnet.convolutions.4.net.0.bias",
    "postnet.convolutions.4.net.1.weight", "postnet.convolutions.4.net.1.bias", "postnet.convolutions.4.net.1.running_mean",
    "postnet.convolutions.4.net.1.running_var", "postnet.convolutions.4.net.1.num_batches_tracked".

Here is the full backtrace:
log.txt

How can I test things on master? I don't have the resources to train my own models.

Update: The latest commit that works for me is around 53b2462.
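For what it's worth, the Missing/Unexpected key lists above differ only in layer naming (the checkpoint's `net.0`/`net.1`, `prenet.layers`, `bn`, and `location_conv` versus master's `convolution1d`, `batch_normalization`, `prenet.linear_layers`, and `location_conv1d`). A hypothetical workaround, not an official TTS utility, is to rewrite the checkpoint keys before calling `load_state_dict`; the rename pairs below are read off the error message and would need updating if the architecture changed in other ways:

```python
# Hypothetical sketch: remap old checkpoint key names to the names the
# current master expects. The substitution pairs are inferred from the
# Missing/Unexpected key lists in the error above.
import re

RENAMES = [
    (r"\.net\.0\.", ".convolution1d."),        # Conv1d inside a Sequential
    (r"\.net\.1\.", ".batch_normalization."),  # BatchNorm1d inside a Sequential
    (r"prenet\.layers\.", "prenet.linear_layers."),
    (r"\.bn\.", ".batch_normalization."),
    (r"location_conv\.", "location_conv1d."),
]

def remap_state_dict(state_dict):
    """Return a copy of state_dict with old key names rewritten."""
    remapped = {}
    for key, value in state_dict.items():
        # num_batches_tracked buffers have no counterpart in the new names
        if key.endswith("num_batches_tracked"):
            continue
        for pattern, replacement in RENAMES:
            key = re.sub(pattern, replacement, key)
        remapped[key] = value
    return remapped
```

One would load the checkpoint with `torch.load`, pass `checkpoint["model"]` through `remap_state_dict`, and hand the result to the model; whether the remapped weights actually behave identically still has to be verified by listening to the output.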

@Mic92
Contributor Author

Mic92 commented Jun 13, 2020

OK, 1d3c0c8 also does not work. Maybe it's a different problem then.

@lexkoro
Contributor

lexkoro commented Jun 13, 2020

Did you use the branch https://github.com/mozilla/TTS/tree/20a6ab3 linked in the wiki?

@Mic92
Contributor Author

Mic92 commented Jun 13, 2020

@sanjaesc It works with 20a6ab3. However, my point is that there should always be at least one model that works with master. Otherwise the library is broken and no one except insiders can work with it.

@reuben
Contributor

reuben commented Jun 13, 2020

The library is under active development. If you don't have the resources to train your own model, then stick to a commit that matches a pre-trained model. It's quite simple.

@reuben
Contributor

reuben commented Jun 13, 2020

Closing because there is no guarantee of master compatibility for releases.

@reuben reuben closed this as completed Jun 13, 2020
@Mic92
Contributor Author

Mic92 commented Jun 14, 2020

> The library is under active development. If you don't have resources to train your own model, then stick to a commit that matches a pre-trained model. It's quite simple.

How do I contribute fixes then if I am stuck on old code? For example, I fixed the vocoder on an older revision, but I cannot test whether it is still broken in newer versions. How do you do unit testing if you don't have a working model?

@reuben
Contributor

reuben commented Jun 14, 2020

Due to constant architectural changes, it's not feasible to always keep an up-to-date model available for master. We train a small testing model for unit tests. For behavioral fixes, I guess there's no way around training a model to see if the fix still applies.

@erogol
Contributor

erogol commented Jun 15, 2020

@Mic92 There is a dummy model we use for unit testing. You can see it under the tests folder. It just makes random predictions, as it is only used for server and model inference tests. If you have a better suggestion, I'd be happy to hear it.
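The dummy-model idea described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual code under the tests folder; the class and parameter names (`DummyTTSModel`, `mel_channels`, `frames_per_char`) are made up for the example. The point is that a stub emitting correctly shaped random output lets server and inference plumbing be tested without a trained checkpoint:

```python
# Hypothetical sketch of a dummy model for inference/server unit tests:
# it mimics a TTS model's output shape (some mel frames per input
# character) with random values, so tests can exercise the inference
# pipeline without a real trained checkpoint.
import random

class DummyTTSModel:
    def __init__(self, mel_channels=80, frames_per_char=5):
        self.mel_channels = mel_channels
        self.frames_per_char = frames_per_char

    def inference(self, char_ids):
        """Return a random [n_frames, mel_channels] 'spectrogram'."""
        n_frames = len(char_ids) * self.frames_per_char
        return [[random.random() for _ in range(self.mel_channels)]
                for _ in range(n_frames)]
```

A test built on such a stub can only verify shapes and plumbing, of course; behavioral regressions in synthesis quality still require a real model, which is the limitation discussed in this thread.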


4 participants