Hi @primepake!
Could you describe briefly how your main wav2lip training behaved?
My situation is the following: syncnet is trained until 0.29 eval loss (higher than 0.25, but I'd like to try with that). I start wav2lip_train.py, which goes okay: eval sync loss is decreasing, reaching 0.75, at which point syncnet_wt gets set to 0.01. This happens at roughly 280000 steps.
After that, train sync loss starts to rise from 0 (this is expected behavior) and eval sync loss starts to decrease fast. At around 340000 steps both of these values stabilize at 0.34. After that, no matter how much I train, train sync loss keeps decreasing (I got it down to 0.24 at 1200000 steps), while eval sync loss decreases much more slowly and from roughly 600000 steps it simply stays at 0.3 and fluctuates there. It's a pity I don't have a plot of the losses :( but it seems like train loss first reaches some "point of balance" and then keeps decreasing, while eval loss decreases to 0.3 and then stays there.
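(For context, by the syncnet_wt switch above I mean the usual Wav2Lip training logic. Below is a minimal sketch of how I understand it, paraphrased from the original wav2lip_train.py; the function names are mine, not this repo's exact code.)

```python
# Sketch of the syncnet_wt switch (paraphrase, not the repo's exact code).

def combined_loss(l1_loss: float, sync_loss: float, syncnet_wt: float) -> float:
    """While syncnet_wt is 0 only the L1 reconstruction loss is optimized;
    once the switch flips to 0.01 the expert sync loss starts contributing."""
    return syncnet_wt * sync_loss + (1.0 - syncnet_wt) * l1_loss

def updated_syncnet_wt(current_wt: float, average_eval_sync_loss: float) -> float:
    """The switch I described: once eval sync loss drops below 0.75,
    syncnet_wt goes from 0 to 0.01 and stays there."""
    if average_eval_sync_loss < 0.75:
        return 0.01
    return current_wt
```

So the numbers above describe what happens after this switch fires at roughly 280000 steps.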
So, my questions are:
Did you train your model up to 0.2 eval sync loss, as advised in the original repo?
What were the values of your train sync loss then?
How many steps did it take? (I know this can differ depending on the training parameters, but still.)
@hannarud Hi! Could you please tell me whether you eventually found a proper strategy for training this 288 scheme, or got good visualization results using this repo? Thx!