
Misbehaving losses while training RawNet1 #23

Closed

sparshsinha123 opened this issue Nov 15, 2021 · 2 comments

sparshsinha123 commented Nov 15, 2021
Hey @Jungjee, I was trying to fine-tune RawNet1 and observed that the center loss does not decrease as training proceeds, although the total loss does go down. I tried increasing the weight of the center loss but still see similar patterns; if the weight is too high, the speaker-basis loss starts to increase instead, so the two losses seem to be somewhat inversely related. Did you observe similar trends? I have attached the loss plots for different values of the c_lambda parameter (the weight of the center loss), along with a sketch of how I understand the losses are combined, below. Is there something that could be going wrong here? In all the cases shown, the learning rate was set to 0.001.

[Loss plots attached for c_lambda = 0.001 (default), 0.1, 0.5, and 5]
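For reference, here is a minimal sketch of how I understand the combined objective (hypothetical names and a standard center-loss formulation; the actual code in this repo may differ):

```python
import torch
import torch.nn.functional as F

def center_loss(embeddings, labels, centers):
    # Standard center-loss formulation (an assumption; the repo's version may
    # differ): mean squared distance from each embedding to its class center.
    return ((embeddings - centers[labels]) ** 2).sum(dim=1).mean()

def total_loss(logits, labels, embeddings, centers, spk_basis_loss, c_lambda=0.001):
    # c_lambda scales only the center-loss term, so raising it makes the
    # optimizer trade the other terms (e.g. the speaker-basis loss) against it.
    ce = F.cross_entropy(logits, labels)
    return ce + c_lambda * center_loss(embeddings, labels, centers) + spk_basis_loss
```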
Jungjee (Owner) commented Nov 17, 2021

Hi, first of all, this tendency is different from what I observed before; I recall all three losses decreasing in the end.

Do the plots show training up to the last epoch? Since you mentioned the total loss is decreasing, I suspect the center loss might only start to decrease quite a while after the start of training.

My suggestion would be to train the model for more epochs, or with a smaller batch size, to increase the total number of iterations (a quick sketch of the arithmetic follows below). Looking at the result with c_lambda=5, I think that with a proper balance and enough iterations you can get low values for all the losses.
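To make "the number of total iterations" concrete (illustrative numbers only, not from any experiment):

```python
import math

# Illustrative only: optimizer steps grow with more epochs and shrink with
# larger batches, so more epochs or a smaller batch both add iterations.
def total_iterations(num_samples, batch_size, num_epochs):
    return math.ceil(num_samples / batch_size) * num_epochs

print(total_iterations(100_000, 120, 20))  # 16680
print(total_iterations(100_000, 60, 20))   # 33340 -- half the batch, twice the steps
```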

sparshsinha123 (Author)

No, these plots do not run up to the last epoch; they cover only the first few epochs.

I will try training for more epochs with a smaller batch size to see if both losses decrease.

Jungjee closed this as completed Mar 16, 2022