
Could You Please Share the Curve of Training Loss? #20

Closed

semi-supervised-paper opened this issue Nov 28, 2019 · 8 comments

Comments

@semi-supervised-paper

Hi,
I want to use CMC in my own experiment, but the loss is strange. Within each epoch, the loss decays normally (e.g. from 20 to 11), but at the start of the next epoch it jumps back to nearly its initial value (around 20 again). I wonder if this is 'normal' in CMC.

Thanks.

@HobbitLong
Owner

HobbitLong commented Nov 28, 2019

I noticed a similar pattern, but it only happens in the first few (<3~5) epochs and is not as pronounced as you describe. Below is an example of the loss, with the x-axis being the number of epochs:

[loss curve plot: l and ab losses vs. epoch]

@semi-supervised-paper
Author

Thanks for your kind response.

@talshef

talshef commented Mar 15, 2020

Hi, thanks for sharing the code.
I've got the same issue: the loss decays during the epoch but reverts to the same point at the beginning of the next epoch. Did you find a solution?

@IFICL

IFICL commented Mar 23, 2020

I also got the same issue. I guess the reason might be that the features stored in the memory bank come from the previous epoch, which leads to a high loss when the new features come in. But if you average the loss over each epoch, it still decays.
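
To make that concrete, this is roughly what I mean by averaging: log the mean loss over the whole epoch instead of the per-iteration value. This is only a sketch; `train_loader`, `model`, `contrast`, the two criteria and `optimizer` are placeholders for whatever your training loop already uses.

```python
# Sketch: report the per-epoch mean loss, since early iterations of an epoch
# score against memory-bank entries written during the previous epoch.
def train_one_epoch(epoch, train_loader, model, contrast,
                    criterion_l, criterion_ab, optimizer, device):
    model.train()
    total_loss, n_batches = 0.0, 0
    for inputs, _, index in train_loader:
        inputs, index = inputs.to(device), index.to(device)

        feat_l, feat_ab = model(inputs)                   # two views -> two features
        out_l, out_ab = contrast(feat_l, feat_ab, index)  # NCE scores vs. memory bank
        loss = criterion_l(out_l) + criterion_ab(out_ab)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_loss += loss.item()
        n_batches += 1

    avg_loss = total_loss / max(n_batches, 1)
    print('epoch {}: average loss {:.4f}'.format(epoch, avg_loss))
    return avg_loss
```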

@ShaoTengLiu

I also got the same issue, which has also been discussed in #27. Yes, the average loss still decays, but I find this may hurt the performance of CMC on small datasets.

@talshef

talshef commented Mar 25, 2020

> I also got the same issue, which has also been discussed in #27. Yes, the average loss still decays, but I find this may hurt the performance of CMC on small datasets.

I had the same experience.
Tuning the hyperparameters nce_k, nce_m, and the learning rate improved the issue in my experiments.
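
For reference, this is the kind of sweep I ran. It is just a sketch: the flag names --nce_k, --nce_m and --learning_rate are assumed to match the arguments of train_CMC.py in this repo (check your copy), and the values are only starting points, not recommendations.

```python
# Hypothetical sweep sketch: launch train_CMC.py with a few combinations of
# nce_k, nce_m and learning rate and compare the resulting loss curves.
import itertools
import subprocess

nce_ks = [1024, 4096, 16384]   # number of negatives
nce_ms = [0.3, 0.5, 0.9]       # memory-bank momentum
lrs = [0.003, 0.01, 0.03]      # learning rate

for nce_k, nce_m, lr in itertools.product(nce_ks, nce_ms, lrs):
    subprocess.run(
        ['python', 'train_CMC.py',
         '--nce_k', str(nce_k),
         '--nce_m', str(nce_m),
         '--learning_rate', str(lr)],
        # add your usual data / output path flags here
        check=True,
    )
```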

@ShaoTengLiu

> I had the same experience.
> Tuning the hyperparameters nce_k, nce_m, and the learning rate improved the issue in my experiments.

Yes, I also find that decreasing nce_k can improve the performance. Could you please share some experience on tuning nce_m and the lr?

@talshef

talshef commented Mar 26, 2020

Decreasing nce_m helped in some cases to make the training more stable, meaning that the loss starts to follow the loss of the last epoch sooner.
For the lr, I played with it until I got the right shape of the loss curve, the same as:

[same loss curve plot as above: l and ab losses vs. epoch]
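
In case it helps to see why a smaller nce_m changes this: the memory bank entry is updated as a momentum blend of the old entry and the new feature, so a smaller momentum lets the stored entries track the current encoder sooner. Below is a rough paraphrase of that update, not the exact code in this repo; the names are placeholders.

```python
# Sketch of the momentum update for the NCE memory bank (paraphrased).
# A smaller `momentum` (nce_m) makes the stored entry follow the newest
# feature more closely, so entries written during the previous epoch are
# less stale at the start of the next one.
import torch

@torch.no_grad()
def update_memory(memory, feats, index, momentum):
    """memory:   (N, D) bank of L2-normalized features
    feats:    (B, D) features of the current batch
    index:    (B,)   dataset indices of the batch samples
    momentum: nce_m; smaller values track the encoder faster
    """
    old = memory.index_select(0, index)               # entries from earlier iterations
    new = momentum * old + (1.0 - momentum) * feats   # momentum blend
    new = new / new.norm(dim=1, keepdim=True)         # re-normalize to unit length
    memory.index_copy_(0, index, new)
    return memory
```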
