eval loss fluctuation #39

Closed

QUTGXX opened this issue Sep 11, 2020 · 7 comments

@QUTGXX

QUTGXX commented Sep 11, 2020

The eval sync loss of the expert discriminator is as follows:

4000step: current averaged_loss is ---------- 1.1633416414260864
5000step: current averaged_loss is ---------- 1.9757428169250488
6000step: current averaged_loss is ---------- 1.9490289688110352
7000step: current averaged_loss is ---------- 2.3177950382232666
8000step: current averaged_loss is ---------- 1.6252386569976807
9000step: current averaged_loss is ---------- 3.818169593811035
10000step: current averaged_loss is ---------- 1.719498872756958
11000step: current averaged_loss is ---------- 1.8442809581756592
12000step: current averaged_loss is ---------- 2.4841384887695312
13000step: current averaged_loss is ---------- 2.462939977645874
14000step: current averaged_loss is ---------- 3.738591432571411
15000step: current averaged_loss is ---------- 2.688401222229004
16000step: current averaged_loss is ---------- 3.177443027496338
17000step: current averaged_loss is ---------- 1.7362573146820068
18000step: current averaged_loss is ---------- 3.5759496688842773
19000step: current averaged_loss is ---------- 3.8388853073120117
20000step: current averaged_loss is ---------- 4.14736270904541
For this model, the training loss has decreased to around 0.1 - 0.2.
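For reference, the averaged_loss above is printed by the eval loop in color_syncnet_train.py. A paraphrased sketch of that loop (variable names approximate, not a verbatim copy of the repo); the key point is that it is a plain mean over eval batches, so a small single-speaker eval split gives a noisy number by construction:

```python
# Paraphrased sketch of the eval loop in color_syncnet_train.py that prints
# "current averaged_loss is ----------"; names are approximate.
import torch
import torch.nn as nn

logloss = nn.BCELoss()

def cosine_loss(a, v, y):
    # a: audio embedding, v: face embedding, y = 1 for in-sync pairs, else 0
    d = nn.functional.cosine_similarity(a, v)
    return logloss(d.unsqueeze(1), y)

def eval_model(test_data_loader, model, device):
    losses = []
    with torch.no_grad():
        for x, mel, y in test_data_loader:
            a, v = model(mel.to(device), x.to(device))
            losses.append(cosine_loss(a, v, y.to(device)).item())
    # A plain mean over the eval batches: with ~18 minutes of one speaker,
    # the eval split is tiny and this estimate fluctuates heavily.
    averaged_loss = sum(losses) / len(losses)
    print('current averaged_loss is ----------', averaged_loss)
    return averaged_loss
```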

The Wav2Lip model's eval loss:
1800step: L1: 0.019517337334755483, Sync loss: 5.680909522589875
2700step: L1: 0.01795881875151617, Sync loss: 5.4678046358124845
3600step: L1: 0.01703862974103992, Sync loss: 5.786964012620793
4500step: L1: 0.016784275337235307, Sync loss: 5.638755851397331
5400step: L1: 0.016678210001135944, Sync loss: 5.832544412830587
6300step: L1: 0.016361638768104446, Sync loss: 5.650567727150149
7200step: L1: 0.016196514041390213, Sync loss: 5.742747967151364
8100step: L1: 0.016216407553923878, Sync loss: 5.588838182910533
9000step: L1: 0.01602265675194806, Sync loss: 5.688869654707154
9900step: L1: 0.016125425466531607, Sync loss: 5.708734381215889
10800step: L1: 0.01588278780883967, Sync loss: 5.918756739389199
11700step: L1: 0.01574758412622011, Sync loss: 5.581946962059989
12600step: L1: 0.015821209518815497, Sync loss: 5.620685570930449
13500step: L1: 0.015698263344598055, Sync loss: 5.617209954880784
14400step: L1: 0.015831564212969895, Sync loss: 5.579334572446499
15300step: L1: 0.015908794453667847, Sync loss: 5.662705282341907
16200step: L1: 0.01584938615055678, Sync loss: 5.67902198072507
17100step: L1: 0.015664026094666987, Sync loss: 5.836531450847076
18000step: L1: 0.01570050628954138, Sync loss: 5.806963780977246
18900step: L1: 0.015791057227724118, Sync loss: 5.494967527464351
19800step: L1: 0.015707670103827658, Sync loss: 5.7215739446087674
20700step: L1: 0.015890353739251, Sync loss: 5.7707554375734205
21600step: L1: 0.015616239360752867, Sync loss: 5.709768658187692
22500step: L1: 0.01574522866395843, Sync loss: 5.753696662893309
23400step: L1: 0.015643829487784953, Sync loss: 5.498267574079026
24300step: L1: 0.015661220601660208, Sync loss: 5.759692171500855
25200step: L1: 0.015491276214194195, Sync loss: 5.577403137075068
26100step: L1: 0.01578893579181268, Sync loss: 5.619578842939902

For this model, the training loss has decreased to around 0.004.
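Similarly, the L1 / Sync loss pairs above appear to be batch means from the eval loop in wav2lip_train.py. A paraphrased sketch (names approximate), with one detail worth noting about how the eval sync loss is used:

```python
# Paraphrased sketch of the eval loop in wav2lip_train.py that prints the
# "L1: ..., Sync loss: ..." lines; names approximate, not a verbatim copy.
import torch
import torch.nn as nn

recon_loss = nn.L1Loss()

def eval_model(test_data_loader, model, get_sync_loss, device):
    sync_losses, recon_losses = [], []
    with torch.no_grad():
        for x, indiv_mels, mel, gt in test_data_loader:
            g = model(indiv_mels.to(device), x.to(device))
            sync_losses.append(get_sync_loss(mel.to(device), g).item())
            recon_losses.append(recon_loss(g, gt.to(device)).item())
    averaged_sync_loss = sum(sync_losses) / len(sync_losses)
    averaged_recon_loss = sum(recon_losses) / len(recon_losses)
    print('L1: {}, Sync loss: {}'.format(averaged_recon_loss, averaged_sync_loss))
    return averaged_sync_loss

# If I read wav2lip_train.py correctly, the returned eval sync loss also
# gates when the sync penalty is switched on during training:
#     if average_sync_loss < .75:
#         hparams.set_hparam('syncnet_wt', 0.01)
# so an eval sync loss stuck around 5.5-5.9 means that penalty never kicks in.
```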

Do you think it is working well? Is it overfitting? @prajwalkr

@prajwalkr
Collaborator

Which dataset are you training on?

@QUTGXX
Author

QUTGXX commented Sep 11, 2020

> Which dataset are you training on?

The dataset is one I made myself, with around 18 minutes of video of a single person.

@prajwalkr
Collaborator

There could be multiple issues when you are training on such a small dataset of a single person. We are unable to comment on why exactly it is not working. Note that you must train the expert discriminator on your new dataset before training Wav2Lip. Good luck with your experiment!
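To illustrate why the expert discriminator has to be trained on the new data first: during Wav2Lip training it is loaded frozen and only used to score the generated mouth region. A rough sketch, paraphrased from wav2lip_train.py (the import path and tensor shapes are from my reading of the repo and may be slightly off):

```python
# The pre-trained expert discriminator is frozen and scores generated frames;
# if it was never trained on your data, this loss signal is meaningless.
import torch
import torch.nn as nn
from models import SyncNet_color as SyncNet  # repo module

device = 'cuda' if torch.cuda.is_available() else 'cpu'
syncnet = SyncNet().to(device)
for p in syncnet.parameters():
    p.requires_grad = False  # the expert is never updated during Wav2Lip training

logloss = nn.BCELoss()

def get_sync_loss(mel, g):
    # g: (B, 3, T, H, W) generated frames; keep only the lower half-face
    g = g[:, :, :, g.size(3) // 2:]
    # stack the T frames along the channel axis: (B, 3*T, H//2, W)
    g = torch.cat([g[:, :, i] for i in range(g.size(2))], dim=1)
    a, v = syncnet(mel, g)
    y = torch.ones(g.size(0), 1).float().to(device)  # target: "in sync" for all
    d = nn.functional.cosine_similarity(a, v)
    return logloss(d.unsqueeze(1), y)
```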

@QUTGXX
Author

QUTGXX commented Sep 11, 2020

> There could be multiple issues when you are training on such a small dataset of a single person. We are unable to comment on why exactly it is not working. Note that you must train the expert discriminator on your new dataset before training Wav2Lip. Good luck with your experiment!

Yep, I trained the expert discriminator first. Could you please tell me the trend of the loss values when you trained on the LRS2 dataset?

@prajwalkr
Collaborator

The eval/train sync loss goes down to ~0.2. The eval/train L1 loss goes down to ~0.02.
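Given how noisy the per-eval numbers above are, it may be easier to compare against these targets after smoothing; an illustrative helper (not from the repo):

```python
# Illustrative only: smooth noisy eval-loss points with a trailing moving
# average before comparing against the ~0.2 sync / ~0.02 L1 targets.
def moving_average(values, window=5):
    smoothed = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# First few eval sync-loss points from the thread above:
sync = [5.68, 5.47, 5.79, 5.64, 5.83, 5.65, 5.74]
print(moving_average(sync))  # the trend is flat, i.e. nowhere near ~0.2 yet
```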

@QUTGXX
Author

QUTGXX commented Sep 11, 2020

> The eval/train sync loss goes down to ~0.2. The eval/train L1 loss goes down to ~0.02.

Got it, thanks for sharing.

@1105135335

1105135335 commented Aug 20, 2022

May I ask: when training the expert discriminator, does the averaged_loss need to go below 0.25?
