Inquiry about Pre-trained Model & Parameter Setup #5
Thanks for your interest. For the first problem, I guess you might be confusing the average accuracy with the last session's accuracy. We report the average accuracy over all sessions, which is the benchmark in most class-incremental learning papers, while your table seems to show the accuracy of the last stage only. For the second problem, different methods have different suitable parameters. For example, LwF and EWC may need a smaller weight decay, while iCaRL requires a larger one. We set slightly different parameters for different methods to fully reflect their performance.
Thanks for your answer! Could I confirm my understanding of the first question again, since it differs from another continual learning library I am working on : ) For my first question, does that mean:
You're right. |
Got it. Thanks. |
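The metric distinction above can be sketched as follows. This is a minimal illustration, not the framework's actual evaluation code; the function name and the per-session numbers are hypothetical.

```python
def average_incremental_accuracy(session_accuracies):
    """Mean of the accuracies measured after every incremental session.

    This is the number reported as the benchmark in most class-incremental
    learning papers, as opposed to the accuracy after the final session alone.
    """
    return sum(session_accuracies) / len(session_accuracies)

# Hypothetical per-session top-1 accuracies (%) from a 10-session run.
accs = [92.0, 85.4, 80.1, 76.3, 73.0, 70.2, 68.5, 66.1, 64.0, 62.3]

avg = average_incremental_accuracy(accs)  # the reported benchmark number
last = accs[-1]                           # last-session accuracy only
```

Because accuracy typically drops as sessions are added, the average over sessions is noticeably higher than the last-session figure, which would explain a table that looks "much lower" than the reported results.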
Many thanks for this wonderful framework! It really helps our work a lot!
I have some questions about your experiment setup.
I followed all the bash files and parameters you provided, but my results seem to be much lower than yours. Is that because you use an ImageNet pre-trained ResNet? Thanks!
Thanks in advance for your answer!