This repository has been archived by the owner on Nov 11, 2023. It is now read-only.

KeyError: "param 'initial_lr' is not specified in param_groups[0] when resuming an optimizer" #19

Closed
Likkkez opened this issue Mar 13, 2023 · 3 comments

Comments

@Likkkez

Likkkez commented Mar 13, 2023

I'm trying to finetune 4.0-v2 using this checkpoint I found: https://huggingface.co/cr941131/sovits-4.0-v2-hubert/tree/main
(not sure if it's good or not)
But when I try to start training, this error happens:

Traceback (most recent call last):
  File "/home/manjaro/.conda/envs/soft-vc/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/media/manjaro/NVME_2tb/NeuralNetworks/so-vits-svc-v2-44100/train.py", line 112, in run
    scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2)
  File "/home/manjaro/.conda/envs/soft-vc/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 583, in __init__
    super(ExponentialLR, self).__init__(optimizer, last_epoch, verbose)
  File "/home/manjaro/.conda/envs/soft-vc/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 42, in __init__
    raise KeyError("param 'initial_lr' is not specified "
KeyError: "param 'initial_lr' is not specified in param_groups[0] when resuming an optimizer"

Where can I find official checkpoints if that one is bad?
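
For reference, this KeyError comes from PyTorch itself: constructing an LR scheduler with last_epoch != -1 tells it that training is being resumed, and it then expects every param_group of the optimizer to already contain 'initial_lr' (normally restored from a checkpointed optimizer state). Below is a minimal sketch of the failure and a common workaround; this is not so-vits-svc code, and epoch_str is just a stand-in for the epoch parsed from the checkpoint name.

import torch

# Toy model and optimizer; the real ones come from the so-vits-svc training script.
model = torch.nn.Linear(4, 4)
optim_g = torch.optim.AdamW(model.parameters(), lr=1e-4)

epoch_str = 1  # placeholder for the epoch recovered from the checkpoint filename

# With a freshly created optimizer, passing last_epoch != -1 raises the KeyError,
# because no param_group carries 'initial_lr' yet. A common workaround is to seed
# it from the current lr before building the scheduler:
for group in optim_g.param_groups:
    group.setdefault("initial_lr", group["lr"])

scheduler_g = torch.optim.lr_scheduler.ExponentialLR(
    optim_g, gamma=0.999, last_epoch=epoch_str - 2
)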

@Miuzarte
Contributor

Miuzarte commented Mar 13, 2023

Distribution of the pretrained models is still being planned.


@NaruseMioShirakana
Contributor

For certain unpleasant reasons, we removed the pretrained model, and there is currently no official way to obtain it.
