Training Loss Not Improving #6

Closed
sherrychen127 opened this issue Nov 4, 2020 · 1 comment

@sherrychen127

Hi, I'm trying to reproduce your results by training on UCF101, but I noticed that my loss is not improving at all. I trained with a batch size of 20, and the loss is stuck at around 15.2 at every iteration and does not decrease. Just wondering, what configuration did you use to train the model?

@BestJuly
Owner

BestJuly commented Nov 6, 2020

Hi, @sherrychen127. Thank you for your interest.

We use the default configurations. Have you tried training with the default settings?

With the default settings, we fix all random seeds, so you should get essentially the same results as ours if you use networks such as C3D or R21D. For R3D, the results reported in our paper were obtained without fixing the random seeds, but they should still be very similar.
Another thing to note: if you use a different CUDA version, there may be small differences in results (or none at all; I am not sure, as I have not tried other experimental environments).
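For reference, here is a minimal sketch of what fixing all random seeds typically looks like in PyTorch. The seed value and the cuDNN flags below are illustrative assumptions, not necessarily what this repository sets; check the default config for the actual values.

```python
import random
import numpy as np
import torch

def set_seed(seed=0):  # hypothetical seed value, for illustration only
    random.seed(seed)                 # Python built-in RNG (e.g. shuffling)
    np.random.seed(seed)              # NumPy RNG (e.g. data augmentation)
    torch.manual_seed(seed)           # PyTorch CPU RNG
    torch.cuda.manual_seed_all(seed)  # PyTorch RNGs on all GPUs
    # Make cuDNN deterministic; this trades a little speed for reproducibility
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed()
```

Even with seeds fixed, some CUDA/cuDNN operations are non-deterministic across library versions, which is why a different CUDA version can still shift results slightly.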

If there are still problems with your training process, please post your training settings here and I can try them to check whether something weird happens in my experimental environment.
