
Reproduce iCarl results #4

Closed
AndreaCossu opened this issue Oct 30, 2021 · 10 comments · Fixed by #12

@AndreaCossu
Contributor

Link to the paper

@AntonioCarta
Contributor

This should be enough: https://github.com/ContinualAI/avalanche/blob/master/examples/icarl.py
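For anyone landing here later, below is a condensed sketch of what that example does. Module paths and the ICaRL constructor have moved between Avalanche releases, so treat the linked script as the authoritative version; the hyperparameters follow the paper's CIFAR-100 protocol.

```python
import torch
from torch.optim import SGD

from avalanche.benchmarks.classic import SplitCIFAR100
from avalanche.models import make_icarl_net, initialize_icarl_net
from avalanche.training.supervised import ICaRL  # older releases: avalanche.training.strategies

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# 10 experiences of 10 classes each, as in the paper's iCIFAR-100 protocol.
benchmark = SplitCIFAR100(n_experiences=10, seed=1234)

# The small ResNet used by iCaRL, split into feature extractor + classifier.
model = make_icarl_net(num_classes=100)
model.apply(initialize_icarl_net)
optimizer = SGD(model.parameters(), lr=2.0, weight_decay=1e-5)  # paper's (large) base LR

strategy = ICaRL(
    model.feature_extractor,
    model.classifier,
    optimizer,
    memory_size=2000,       # exemplar budget K from the paper
    buffer_transform=None,  # the full example augments buffer samples here
    fixed_memory=True,
    train_mb_size=128,
    train_epochs=70,
    eval_mb_size=128,
    device=device,
    # The full example also attaches an LR-schedule plugin; omitted here.
)

for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)
```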

@rudysemola
Contributor

I'll gladly try it.

AndreaCossu linked a pull request Nov 20, 2021 that will close this issue
@qsunyuan

Hi! Sorry to interrupt you.

I reproduced the iCaRL results (default args), but I cannot achieve the target result of 0.62 from this link, even though I tried several different random seeds.

I would like to know where the average-accuracy value of 0.62 comes from:

https://github.com/ContinualAI/reproducible-continual-learning/blob/eafcb218c08d6e6234799d0171b4a19bd6ac1d89/strategies/target_results.csv#L18

I also checked the original paper. The top-1 test accuracy on the last experience is about 0.49 (amazing!).

[Image: screenshot of the paper's accuracy plot, with the last-experience top-1 test accuracy at about 0.49]

I tried both my own implementation and the Avalanche library, and I can't reach such high results. Could you please give me some insight? I'm quite confused right now.

Hope to get your reply soon. Thanks.

@AndreaCossu
Contributor Author

AndreaCossu commented Jan 13, 2022

Hi @qsunyuan, can you please open a new issue? @rudysemola, maybe you can help with an answer?

@qsunyuan

Thanks for your quick reply. I will open a new issue.

@rudysemola
Contributor

Hi @qsunyuan.
If I remember correctly, the last point of the plot you showed is the accuracy achieved on the last experience (0.49).
The metric used here is instead the average incremental accuracy, defined by the authors in Section 4 (Experiments, benchmark protocol part).
If you look at the image below, taken from the paper (Table 1a), the result reported in the paper with this definition of the metric is 0.641 for 10 classes, not 0.49.
Our 0.62 was obtained using the same definition as in the paper.

[Image: Table 1a from the paper, reporting an average incremental accuracy of 0.641 for iCaRL with 10 classes per batch]
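To make the distinction concrete, here is a minimal sketch of the metric; the per-step accuracies below are invented for illustration only:

```python
def average_incremental_accuracy(step_accuracies):
    """Average incremental accuracy (iCaRL paper, Sec. 4): the mean,
    over all incremental steps, of the test accuracy computed on all
    classes observed up to and including that step."""
    return sum(step_accuracies) / len(step_accuracies)

# Hypothetical 10-step run: accuracy over "all classes so far" decays
# as classes accumulate, ending at 0.49 on the last experience...
accs = [0.88, 0.80, 0.76, 0.72, 0.69, 0.65, 0.61, 0.57, 0.53, 0.49]

# ...yet the average incremental accuracy is much higher than 0.49.
print(average_incremental_accuracy(accs))  # ≈ 0.67
```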

@rudysemola
Contributor

If you want another, similar piece of code to reproduce the results in the paper (close to 0.641), you can use this old code. It is basically the same, but you never know, maybe it can work and help you ;)

https://github.com/rudysemola/reproducible-continual-learning/blob/024d26585dc9b3917d8827be376b57dd3b1eaa40/strategies/iCARL/experiment.py

@qsunyuan

Thanks for your help, it really helps me a lot. I will try the code right now.
Thanks again. Best wishes. Have a good day.

@qsunyuan

I tried your link @rudysemola and this one: https://github.com/ContinualAI/avalanche/blob/master/examples/icarl.py

I achieved a result of about 0.62.

Unfortunately, I still did not reach the 0.64 result.

@AndreaCossu
Contributor Author

Hi :) Minor changes in performance with respect to the original paper are often due to slightly different training modalities (e.g. the learning rate scheduler), which are not always easy to investigate or are not disclosed in the paper. Therefore, as a policy, we allow for a 2% slack in accuracy during tests. Our target result for iCaRL is 0.62. If you manage to close this gap, please let us know!
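For reference, the gate implied by this policy looks roughly like the sketch below. This is illustrative only, not the repository's actual test code, and the names are made up:

```python
TARGET_ACC = 0.62  # target for iCaRL in strategies/target_results.csv
SLACK = 0.02       # allowed gap under the 2% policy

def check_icarl_result(achieved_acc: float) -> None:
    # A run passes if it lands within the slack below the target.
    assert achieved_acc >= TARGET_ACC - SLACK, (
        f"accuracy {achieved_acc:.3f} is more than {SLACK:.0%} "
        f"below the target {TARGET_ACC:.2f}"
    )

check_icarl_result(0.62)   # passes
check_icarl_result(0.605)  # passes: within the 2% slack
```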
