
Main wav2lip training expected loss behavior (wav2lip_train.py) #34

Closed
hannarud opened this issue Dec 1, 2022 · 2 comments

hannarud commented Dec 1, 2022

Hi @primepake!
Could you describe briefly how your main wav2lip training behaved?
My situation is the following: syncnet is trained to an eval loss of 0.29 (higher than 0.25, but I'd like to try with it anyway). I start wav2lip_train.py, which goes fine; eval sync loss decreases and reaches 0.75, at which point syncnet_wt gets set to 0.01. This happens at roughly 280,000 steps.
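
(For reference, this switch comes from the evaluation hook in the upstream Wav2Lip wav2lip_train.py; a condensed sketch below, with names taken from the upstream script, which may differ slightly in this fork:)

```python
# Condensed sketch of the switch described above, following the evaluation
# hook in the upstream Wav2Lip wav2lip_train.py; names come from that script
# and may differ slightly in this fork.

def maybe_enable_sync_loss(average_sync_loss: float, hparams) -> None:
    """Once the eval sync loss drops below 0.75, start weighting the
    expert SyncNet loss in the generator objective."""
    if average_sync_loss < 0.75:
        # 0.01 is the weight used by wav2lip_train.py (no visual-quality GAN);
        # hq_wav2lip_train.py uses 0.03 instead.
        hparams.set_hparam('syncnet_wt', 0.01)
```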

After that, train sync loss starts to rise from 0 (this is expected behavior) and eval sync loss starts to decrease quickly. At around 340,000 steps both values stabilize at 0.34. From then on, no matter how long I train, train sync loss keeps decreasing (I got it down to 0.24 at 1,200,000 steps), while eval sync loss decreases much more slowly and at some point (roughly 600,000 steps) simply settles around 0.3 and fluctuates there. It's a pity I don't have a plot for the losses :( but it seems like train loss first reaches some "point of balance" and then decreases steadily, while eval loss goes down to 0.3 and then stays there.
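
(For anyone who wants to reproduce these curves, a minimal logging sketch; this is not part of the repo, and `LOG_PATH` and the hook point into the train loop are up to you:)

```python
import csv
import os

LOG_PATH = "sync_loss_log.csv"  # hypothetical location; pick your own

def log_sync_loss(step: int, train_sync_loss: float, eval_sync_loss: float) -> None:
    """Append one row per eval interval so the train/eval divergence
    described above can be plotted afterwards."""
    write_header = not os.path.exists(LOG_PATH)
    with open(LOG_PATH, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["step", "train_sync_loss", "eval_sync_loss"])
        writer.writerow([step, train_sync_loss, eval_sync_loss])
```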

So, my questions are:

  • Did you train your model down to 0.2 eval sync loss, as advised in the original repo?
  • What were the values of your train sync loss at that point?
  • How many steps did it take? (I know this can differ depending on the training parameters, but still.)

ghost commented Dec 1, 2022

Good questions!

  • Actually, our eval loss did go below 0.25; you need to make some changes to the code (see the illustrative sketch below).
  • That can take more than 200k steps.
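
(To make the first point concrete: the reply does not say which change was made, so this is illustrative only. The two obvious knobs in the upstream wav2lip_train.py are the 0.75 activation threshold and syncnet_wt itself, which enters the generator loss like this:)

```python
# Illustrative only: the reply above does not specify the exact modification.
# In the upstream wav2lip_train.py the sync loss enters the generator
# objective through syncnet_wt:
#     loss = syncnet_wt * sync_loss + (1 - syncnet_wt) * l1_loss
# so lowering the 0.75 activation threshold or raising syncnet_wt are the
# natural places to experiment when chasing a lower eval sync loss.

def generator_loss(l1_loss: float, sync_loss: float, syncnet_wt: float = 0.01) -> float:
    """Combined generator loss as in the upstream script (GAN term omitted)."""
    return syncnet_wt * sync_loss + (1.0 - syncnet_wt) * l1_loss
```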

ghost closed this as completed on Dec 2, 2022

zizh01 commented Jun 12, 2023

@hannarud Hi! Could you please tell me whether you eventually found a proper training strategy for this 288 scheme, or got good visual results using this repo? Thanks!
