coil fixed memory #4
Thanks for your interest. The results of iCaRL in Table 1 are reproduced with BCE loss, following the official implementation. When preparing the toolbox, we tried different parameter combinations and loss terms, and switched the BCE loss to CE loss. It turned out this choice helps to improve the performance of iCaRL. Feel free to reopen this issue if you have more questions.
Sorry to bother you again: in your CUB200 or CUB100 setting, did you use an ImageNet-pretrained ResNet-18? I ran the experiment on the CUB dataset, but my results are very poor. I checked the paper link and found they used a pretrained model to finetune for class-incremental learning.
Yes, pretraining is needed for CUB.
Hi, I achieved results similar to your COIL on CUB200 in the first 4 tasks, matching Figure 4(h). (I tried the fixed-total-memory setting, as most class-incremental methods use this protocol.) In your paper: "Correspondingly, we also conduct the experiment on CUB-100/200 with rare exemplars, i.e., we only save three exemplars per class."

Fixed total memory setting: does it mean I save 600 (200 * 3) samples in total, and as the tasks increase, the number of samples per class decreases? For example, 30 (600/20) exemplars per class in the 1st task; 15 (600/40) exemplars per class in the 2nd task; and finally 3 images per class?

Fixed images-per-class setting: or is the memory fixed at 3 images per class from the beginning?

What exactly is your replay memory method (fixed total memory?) Thanks in advance.
Hi, maybe you should read iCaRL [1] first, which introduces the basic idea of exemplars. Our implementation is based on it, see here. [1] iCaRL: Incremental Classifier and Representation Learning
Thanks for your quick reply; my question was unclear, sorry. I just want to check your CUB200 experiment settings: memory_size or memory_per_class? (600 in total, or 3 per class?) Following the link you provided, it should be "memory_size" (CIFAR100 settings).
Should be the former one.
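The fixed-total-memory bookkeeping discussed above can be sketched as follows. This is an illustration rather than the PyCIL implementation, and it assumes CUB-200 is split into 10 tasks of 20 classes each with a total budget of 600 exemplars (200 classes * 3):

```python
# Fixed-total-memory protocol: a constant overall budget is split evenly
# across all classes seen so far, so per-class storage shrinks over time.
MEMORY_SIZE = 600       # total exemplar budget (200 classes * 3)
CLASSES_PER_TASK = 20   # assumed task split for CUB-200
NUM_TASKS = 10

for task in range(1, NUM_TASKS + 1):
    seen_classes = task * CLASSES_PER_TASK
    per_class = MEMORY_SIZE // seen_classes
    print(f"task {task:2d}: {seen_classes:3d} classes seen, "
          f"{per_class:2d} exemplars per class")
```

This reproduces the progression in the question: 30 exemplars per class after the 1st task, 15 after the 2nd, down to 3 per class once all 200 classes have arrived.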
Thank you for your patient explanation. Have a good day!
Amazing toolbox!!!
I have a question about your results of COIL.
In your work, Section 5.2:
Since all compared methods are exemplar-based, we fix an equal number of exemplars for every method, i.e., 2,000 exemplars for CIFAR-100 and ImageNet100, 20,000 for ImageNet-1000. As a result, the picked exemplars per class is 20, which is abundant for every class.
I just want to check: is the replay size fixed at 2,000 in total over the training process, meaning "fixed_memory" in the json file is set to false, as shown in this link? I'm a little confused about this setting because there are different protocols in the recent community.
PyCIL/exps/coil.json
Line 6 in 6d2c128
The reason I came across this issue is:
As shown in this table, the iCaRL result for 10 steps is reported as about 61.74, which is lower than the roughly 64 reported in the original paper.
Hope to get your reply soon. Thanks in advance.
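The difference between the two protocols being asked about can be sketched as a small helper. This is a hypothetical function for illustration only; the parameter names mirror the PyCIL config keys ("memory_size", "fixed_memory", "memory_per_class"), but the function itself is not part of the toolbox:

```python
def exemplars_per_class(num_seen_classes: int,
                        memory_size: int = 2000,
                        fixed_memory: bool = False,
                        memory_per_class: int = 20) -> int:
    """Illustrative per-class exemplar count under the two common protocols."""
    if fixed_memory:
        # Fixed-per-class protocol: a constant count per class, so the
        # total buffer grows as new classes arrive.
        return memory_per_class
    # Fixed-total protocol: the overall budget is split evenly across all
    # classes seen so far, so per-class storage shrinks over training.
    return memory_size // num_seen_classes
```

With "fixed_memory" set to false and a 2,000-exemplar budget on CIFAR-100, this gives 20 exemplars per class at the end of training (2000 / 100), matching the "20 per class" figure quoted from Section 5.2.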