The pretrained teacher and hyper-parameters on CIFAR-100 #2
Hi, thanks for the interesting work. I am trying to reproduce the results on CIFAR-100 but have failed. I have some questions about the implementation on CIFAR-100 and would appreciate any suggestions. Specifically: is the training loss implementation on CIFAR-100 the same as that on ImageNet, except that $\alpha$ is set to 2.25 and T is set to 4? And are the pretrained, fixed teachers used in the experiments the same as those in CRD? Thank you in advance!
Comments
Thanks for your attention. To keep consistency with the ImageNet experiments, the CIFAR-100 experiments are also run on the Overhaul repo (https://github.com/clovaai/overhaul-distillation). As described in our paper, the training settings are kept the same as CRD's. The loss implementation is the same as that on ImageNet; alpha is set to 2.25 and T to 4, as described in Sec. 5. The pretrained teachers are re-trained on Overhaul using the same training settings as CRD. Note that results are averaged over 5 runs for CIFAR-100.
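For reference, here is a minimal PyTorch sketch of the standard (Hinton-style) KD objective with the hyper-parameters stated above. It shows only the vanilla CE + temperature-scaled KL combination; the paper's adaptive weighting of the KD term is not reproduced here.

```python
# Minimal sketch of the standard KD objective with alpha = 2.25, T = 4,
# as stated above. Vanilla CE + KL only; the paper's adaptive weighting
# of the KD term is NOT included.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, alpha=2.25, T=4.0):
    # Hard-label cross-entropy term.
    ce = F.cross_entropy(student_logits, targets)
    # Soft-label KL term, scaled by T^2 to keep gradient magnitudes
    # comparable across temperatures.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return ce + alpha * kl
```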
Thank you very much for the quick response. Now I guess the reason is the teachers: the teachers I previously used were downloaded from CRD. I will re-train the teachers on Overhaul using the CRD training settings. Thanks!
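For anyone else re-training teachers: below is a sketch of the CIFAR-100 optimization recipe from CRD's RepDistiller repo, assuming its default hyper-parameters (SGD, lr 0.05, 240 epochs, step decay by 0.1 at epochs 150/180/210, momentum 0.9, weight decay 5e-4). Double-check these against the CRD repo before use.

```python
# Sketch of the CRD (RepDistiller) CIFAR-100 training schedule, assuming
# its default hyper-parameters; verify against the CRD repo before use.
import torch

def make_optimizer(model):
    optimizer = torch.optim.SGD(
        model.parameters(),
        lr=0.05,            # base learning rate
        momentum=0.9,
        weight_decay=5e-4,
    )
    # Decay the lr by 10x at epochs 150, 180, and 210 (240 epochs total).
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[150, 180, 210], gamma=0.1
    )
    return optimizer, scheduler
```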
Have you reproduced the results?
No, I have not. How about you? I did not find much performance difference from the original KD.
Neither did I.
Me too, no difference from the original KD. It feels like bullshit. This kind of paper is heavily packaged: the essence is just to attenuate the teacher's KD term when the teacher is not very accurate. This idea is too simple; it's not likely to work either experimentally or theoretically, so it's not worth our time to study.
Yeah, the main idea of the method is the weighting. Previously I was just curious how a CE + KL loss with an adaptive weight could achieve such good performance. The author said they retrained the teachers, so probably the results can only be reproduced with their pretrained teachers. So just move on.
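To make the "adaptive weight" point concrete, here is one purely illustrative guess at the general shape being discussed: a per-sample weight that attenuates the KL term when the teacher is less confident on the ground-truth class. This is an assumption about the idea's shape, not the paper's actual formula.

```python
# Illustrative only: one possible per-sample adaptive weighting of the
# KD term, down-weighting samples where the teacher is unsure of the
# true class. A guess at the general shape, NOT the paper's formula.
import torch
import torch.nn.functional as F

def adaptive_kd_loss(student_logits, teacher_logits, targets, alpha=2.25, T=4.0):
    ce = F.cross_entropy(student_logits, targets)
    # Per-sample KL between temperature-softened distributions.
    kl_per_sample = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="none",
    ).sum(dim=1) * (T * T)
    # Hypothetical weight: the teacher's probability on the ground-truth
    # class, so confidently-correct teachers contribute more.
    with torch.no_grad():
        w = F.softmax(teacher_logits, dim=1).gather(
            1, targets.unsqueeze(1)
        ).squeeze(1)
    return ce + alpha * (w * kl_per_sample).mean()
```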
@summertaiyuan @VelsLiu I totally disagree with the point 'This idea is too simple. It's not likely to work either experimentally or theoretically.'
I sincerely apologize to you; I reproduced your results tonight. (Update: I withdraw the apology.)