Performance of resnet101 #25
Comments
@kleinzcy, I hope you don't mind me asking a question, since I encountered the same problem you mentioned at the beginning: during training, the training and validation losses barely change from epoch 1 to epoch 120, fluctuating around 0.3, as shown below:
(INFO) 2021-02-09 06:50:46: Epoch 99, training loss 0.319589: 96%|########################## | 512/531 [05:30<00:12, 1.56it/s]
When you reported that your result was as good as in the paper, or even better, did you observe the loss drop much below 0.3? How did you solve the problem? Thanks.
@MaitaYuki Sorry for the late reply; I was on a long holiday. The loss fluctuates around 0.3 because of the normalized focal loss. You can look at its formulation.
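To see why a normalized focal loss can plateau instead of shrinking, here is a minimal sketch. The function name and the exact normalization (dividing the focal weights by their batch mean) are my illustration and may differ from the repository's actual implementation:

```python
import numpy as np

def normalized_focal_loss(probs, gamma=2.0, eps=1e-8):
    """Illustrative normalized focal loss (hypothetical formulation).

    probs: predicted probabilities for the true class, shape (N,).
    The focal weights (1 - p)^gamma are rescaled by their mean, so the
    loss keeps roughly the scale of plain cross-entropy even as the
    predictions improve. This is why its numeric value can hover around
    a constant instead of decaying toward zero.
    """
    probs = np.asarray(probs, dtype=np.float64)
    weights = (1.0 - probs) ** gamma
    weights = weights / (weights.mean() + eps)  # normalization step
    return float(np.mean(-weights * np.log(probs + eps)))
```

For predictions of 0.9 everywhere, the normalized loss is about -log(0.9) ≈ 0.105, while the unnormalized focal loss (1 - 0.9)^2 · (-log 0.9) would be roughly 0.001; the normalization keeps the reported number at cross-entropy scale, so a flat curve around 0.3 does not by itself mean training has stalled.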
@kleinzcy @MaitaYuki Do you remember how long 120 epochs took (and how many GPUs you used)?
Hi authors, thanks for your nice paper and code!
Recently, I retrained the ResNet-101 model with your code, but my result is not as good as reported in the paper. I have read the existing issues but did not find any helpful information.
My environment: Ubuntu 16.04, CUDA 10.1, PyTorch 1.3.0, four TITAN Xp GPUs
My results (NoBRS, last checkpoint, NFL):
Results after f-BRS-B:
Also, the training curve is strange:
The training loss is growing or roughly constant (changing only slightly). Do you have any idea why?
Thanks.