
 The pretrained teacher and hyper-parameters on CIFAR-100 #2

Closed
VelsLiu opened this issue Apr 15, 2021 · 8 comments


VelsLiu commented Apr 15, 2021

Hi, thanks for the interesting work. I am trying to reproduce the results on CIFAR-100 but have failed so far. I have some questions about the CIFAR-100 implementation and would appreciate any suggestions. Specifically: is the training-loss implementation on CIFAR-100 the same as on ImageNet, except that $\alpha$ is set to 2.25 and T is set to 4? And are the pretrained, fixed teachers used in the experiments the same as those in CRD? Thank you in advance!

@woshichase

Thanks for your attention. To keep consistency with the ImageNet experiments, the CIFAR-100 experiments are also run on the Overhaul repo (https://github.com/clovaai/overhaul-distillation). As described in our paper, we use the same training settings as CRD. The loss implementation is the same as on ImageNet; set alpha to 2.25 and T to 4, as described in Sec. 5. The pretrained teachers are re-trained on Overhaul using the same training settings as CRD. Note that the CIFAR-100 results are averaged over 5 runs.
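For readers following along: the loss being discussed (hard-label cross-entropy plus an alpha-weighted, temperature-scaled KL term, with alpha = 2.25 and T = 4 as stated above) can be sketched as below. This is a minimal stdlib-only illustration of the standard KD objective, not the repo's actual code, which may combine or weight the terms differently:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax, numerically stabilized."""
    z = [v / T for v in logits]
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def kd_loss(student_logits, teacher_logits, label, alpha=2.25, T=4.0):
    """Single-sample KD loss: CE on the hard label + alpha * T^2 * KL
    between temperature-softened teacher and student distributions.
    The T^2 factor keeps gradient magnitudes comparable across temperatures."""
    ce = -math.log(softmax(student_logits)[label])
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = sum(pt * (math.log(pt) - math.log(ps)) for pt, ps in zip(p_t, p_s))
    return ce + alpha * T * T * kl
```

When the teacher and student logits agree, the KL term vanishes and the loss reduces to plain cross-entropy.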


VelsLiu commented Apr 15, 2021

Thank you very much for the quick response. Now I guess the reason is the teachers. The teachers that I previously used were downloaded from CRD. I will re-train the teacher on Overhaul using the CRD training setting. Thanks!

@summertaiyuan

> Thank you very much for the quick response. Now I guess the reason is the teachers. The teachers that I previously used were downloaded from CRD. I will re-train the teacher on Overhaul using the CRD training setting. Thanks!

Have you reproduced the results?


VelsLiu commented May 19, 2021

> > Thank you very much for the quick response. Now I guess the reason is the teachers. The teachers that I previously used were downloaded from CRD. I will re-train the teacher on Overhaul using the CRD training setting. Thanks!
>
> Have you reproduced the results?

No, I have not. How about you? I did not find much performance difference from the original KD.


summertaiyuan commented May 19, 2021

Same here.

> > Thank you very much for the quick response. Now I guess the reason is the teachers. The teachers that I previously used were downloaded from CRD. I will re-train the teacher on Overhaul using the CRD training setting. Thanks!
> >
> > Have you reproduced the results?
>
> No, I have not. How about you? I did not find much performance difference from the original KD.

Me too, no difference from the original KD. It feels like bullshit.

This kind of paper is heavily packaged. The essence is just to attenuate the teacher's KD term when the teacher is not very accurate. The idea is too simple; it is unlikely to work either experimentally or theoretically, so it is not worth our time to study.
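For concreteness, the mechanism being described above (down-weighting the KD term on samples where the teacher is unreliable) could be sketched roughly as follows. The weighting rule here is hypothetical, chosen only to illustrate the commenter's characterization; it is not taken from the paper:

```python
import math

def softmax(logits):
    """Numerically stabilized softmax."""
    m = max(logits)
    e = [math.exp(v - m) for v in logits]
    s = sum(e)
    return [v / s for v in e]

def attenuated_kd_weight(teacher_logits, label, base_alpha=2.25):
    """Hypothetical per-sample KD weight: scale the base weight by the
    teacher's probability on the true class, so that a teacher that is
    wrong or uncertain on a sample contributes less to the KD term."""
    return base_alpha * softmax(teacher_logits)[label]
```

A confident, correct teacher keeps a weight close to `base_alpha`; a teacher that puts its mass on the wrong class drives the weight toward zero.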


VelsLiu commented May 19, 2021

> Same here.
>
> > > Thank you very much for the quick response. Now I guess the reason is the teachers. The teachers that I previously used were downloaded from CRD. I will re-train the teacher on Overhaul using the CRD training setting. Thanks!
> > >
> > > Have you reproduced the results?
> >
> > No, I have not. How about you? I did not find much performance difference from the original KD.
>
> Me too, no difference from the original KD. It feels like bullshit.
>
> This kind of paper is heavily packaged. The essence is just to attenuate the teacher's KD term when the teacher is not very accurate. The idea is too simple; it is unlikely to work either experimentally or theoretically, so it is not worth our time to study.

Yeah, the main idea of the method is the weight. Previously I was just curious how a CE+KL loss with an adaptive weight could achieve such good performance. The author said they retrained the teachers, so probably the results can only be reproduced with their pretrained teachers. Let's just move on.

@woshichase

@summertaiyuan @VelsLiu
1. We have already responded on how to reproduce the results on CIFAR-100. It is more convincing to validate the idea on a large-scale dataset such as ImageNet, so to keep consistency with the ImageNet repo we also ran CIFAR-100 on the Overhaul repo and retrained all the models (including the teachers) using exactly the same settings as CRD. We are currently on a tight schedule, but you can refer to the attached training logs downloaded from our training cluster.
log_cifar.zip

2. I totally disagree with the claim "This idea is too simple. It's not likely to work either experimentally or theoretically."
A method's effectiveness should not be tied to its complexity. Focal Loss [1] designs a concise, uncomplicated loss that effectively focuses training on hard samples and prevents the easy samples from overwhelming it. The idea for our work came up two years ago during one of our projects. Simple though it may be, one can see its effectiveness by running our released code on ImageNet, which is a more convincing dataset for validation.
[1] Lin T.-Y., Goyal P., Girshick R., et al. Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision, 2017: 2980-2988.
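Focal Loss itself is a good example of a simple-but-effective objective. A minimal single-sample sketch of the published formula, for readers unfamiliar with it (multi-class softmax form; the paper's detection setting uses a sigmoid variant):

```python
import math

def softmax(logits):
    """Numerically stabilized softmax."""
    m = max(logits)
    e = [math.exp(v - m) for v in logits]
    s = sum(e)
    return [v / s for v in e]

def focal_loss(logits, label, gamma=2.0):
    """Focal loss (Lin et al., 2017): (1 - p_t)^gamma * CE.
    The modulating factor shrinks toward zero as p_t -> 1, so easy,
    already well-classified examples are down-weighted."""
    pt = softmax(logits)[label]
    return -((1.0 - pt) ** gamma) * math.log(pt)
```

With `gamma = 0` it reduces to plain cross-entropy; increasing `gamma` focuses training on hard examples.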


summertaiyuan commented May 19, 2021

> @summertaiyuan @VelsLiu
>
> 1. We have already responded on how to reproduce the results on CIFAR-100. It is more convincing to validate the idea on a large-scale dataset such as ImageNet, so to keep consistency with the ImageNet repo we also ran CIFAR-100 on the Overhaul repo and retrained all the models (including the teachers) using exactly the same settings as CRD. We are currently on a tight schedule, but you can refer to the attached training logs downloaded from our training cluster.
> log_cifar.zip
>
> 2. I totally disagree with the claim "This idea is too simple. It's not likely to work either experimentally or theoretically."
> A method's effectiveness should not be tied to its complexity. Focal Loss [1] designs a concise, uncomplicated loss that effectively focuses training on hard samples and prevents the easy samples from overwhelming it. The idea for our work came up two years ago during one of our projects. Simple though it may be, one can see its effectiveness by running our released code on ImageNet, which is a more convincing dataset for validation.
> [1] Lin T.-Y., Goyal P., Girshick R., et al. Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision, 2017: 2980-2988.

I sincerely apologize to you; I reproduced your results tonight.

I withdraw the apology.
