
cannot start training from a pre-trained model #3

Open · eschmidbauer opened this issue Jul 19, 2022 · 5 comments
@eschmidbauer

I've downloaded the model provided for RADTTS, and I'm trying to use it to start training, but I get the following error:

python train.py -c config.json -p train_config.ignore_layers=["speaker_embedding.weight"] train_config.checkpoint_path='models/radtts++ljs-dap.pt'

> got rank 0 and world size 1 ...
/debug
Using seed 1007
Applying spectral norm to text encoder LSTM
/root/radtts/common.py:391: UserWarning: torch.qr is deprecated in favor of torch.linalg.qr and will be removed in a future PyTorch release.
The boolean parameter 'some' has been replaced with a string parameter 'mode'.
Q, R = torch.qr(A, some)
should be replaced with
Q, R = torch.linalg.qr(A, 'reduced' if some else 'complete') (Triggered internally at  ../aten/src/ATen/native/BatchLinearAlgebra.cpp:1980.)
  W = torch.qr(torch.FloatTensor(c, c).normal_())[0]
Applying spectral norm to context encoder LSTM

Initializing RAdam optimizer
Traceback (most recent call last):
  File "/root/radtts/train.py", line 498, in <module>
    train(n_gpus, rank, **train_config)
  File "/root/radtts/train.py", line 357, in train
    model, optimizer, iteration = load_checkpoint(
  File "/root/radtts/train.py", line 181, in load_checkpoint
    iteration = checkpoint_dict['iteration']
KeyError: 'iteration'
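
The error suggests the released file lacks the 'iteration' entry (and presumably the optimizer state) that load_checkpoint() in train.py expects. A quick way to confirm is to inspect the checkpoint's keys; this is a hypothetical debugging snippet, not part of the repo:

import torch

checkpoint = torch.load('models/radtts++ljs-dap.pt', map_location='cpu')
print(checkpoint.keys())  # if 'iteration' is missing, load_checkpoint() fails as above

(The torch.qr deprecation warning earlier in the log is unrelated; per the warning text itself, the call in common.py can be updated to torch.linalg.qr(..., 'reduced').)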
@rafaelvalle (Contributor) commented Jul 19, 2022

Nice catch! It's because checkpoint_path requires a checkpoint dictionary with an optimizer state and iteration number, which we don't provide. Please use warmstart_checkpoint_path instead of checkpoint_path and let us know:

python train.py -c config.json -p train_config.ignore_layers_warmstart=["speaker_embedding.weight"] train_config.warmstart_checkpoint_path=model_path.pt
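
The difference between the two loading paths is roughly the following (a minimal sketch, not the repo's exact code; the 'state_dict' and 'optimizer' key names are assumptions, while 'iteration' is confirmed by the traceback above):

import torch

def resume_checkpoint(path, model, optimizer):
    # Resuming: needs the full training state saved alongside the weights.
    ckpt = torch.load(path, map_location='cpu')
    model.load_state_dict(ckpt['state_dict'])
    optimizer.load_state_dict(ckpt['optimizer'])  # absent in the released model
    return ckpt['iteration']                      # hence the KeyError

def warmstart_checkpoint(path, model, ignore_layers=()):
    # Warm starting: copy whatever weights match and begin at iteration 0.
    ckpt = torch.load(path, map_location='cpu')
    state = ckpt.get('state_dict', ckpt)  # tolerate a bare state dict
    state = {k: v for k, v in state.items() if k not in ignore_layers}
    model.load_state_dict(state, strict=False)
    return 0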

@taalua commented Jul 20, 2022

Hi, thank you for the pretrained model.
Which config.json should I use for training: _agap, _bgap, or _dap?
I got this error:
line 187, in load_data
'emotion': d[3],

Thanks

@rafaelvalle (Contributor)

Can you please pull and try again? The previous dataloader expected emotion and duration fields in the filelist; the current one works if the filelist has only filename, text, and speaker label.

Are you planning to train on new data?
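
For illustration, a filelist line in the new format would look something like this (the pipe delimiter and exact field order follow the common NVIDIA TTS filelist convention and are an assumption here, not confirmed in this thread):

wavs/LJ001-0001.wav|Printing, in the only sense with which we are at present concerned.|0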

@sjkoelle

I too would be interested in seeing a sample config for warm starting. I imagine there may be differences in the learning rate scheduling?

@sjkoelle commented Feb 1, 2023

I've had more success both training from scratch and warm starting when not ignoring the speaker embedding. Warm starting a multispeaker model does not work for me.
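
Concretely, that warmstart setup would correspond to something like the following in config.json (a sketch; only the two keys shown earlier in this thread are confirmed, and other train_config entries are omitted):

"train_config": {
    "warmstart_checkpoint_path": "models/radtts++ljs-dap.pt",
    "ignore_layers_warmstart": []
}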
