
coil fixed memory #4

Closed
qsunyuan opened this issue Mar 25, 2022 · 8 comments

@qsunyuan

Amazing toolbox!!!

I have a question about your results for COIL.

In your work, Section 5.2:

Since all compared methods are exemplar-based, we fix an equal number of exemplars for every method, i.e., 2,000 exemplars for CIFAR-100 and ImageNet100, 20,000 for ImageNet-1000. As a result, the picked exemplars per class is 20, which is abundant for every class.

I just want to confirm the replay size: a fixed memory of 2,000 in total over the whole training process, which would mean that "fixed_memory" in the json file is set to false, as shown in this link. I'm a little confused about this setting because there are different protocols in the recent literature.

"fixed_memory": false,

The reason I came across this issue is:

[screenshot: table of reproduced results]

As shown in this table, the iCaRL result for 10 steps is reported as about 61.74, which is lower than the roughly 64 reported in the original paper.

Hope to get your reply soon. Thanks in advance.

@zhoudw-zdw
Collaborator

Thanks for your interest.

The results of iCaRL in Table 1 are reproduced with BCE loss, following the official implementation.

When preparing the toolbox, we tried different parameter combinations and loss terms, and switched the BCE loss to a CE loss. It turns out this choice helps improve the performance of iCaRL.
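For reference, the swap described above would look roughly like this in PyTorch; this is just a sketch of the two loss choices on dummy tensors, not the toolbox's exact code:

```python
import torch
import torch.nn.functional as F

# dummy batch: 4 samples, 10 classes seen so far (placeholder values)
logits = torch.randn(4, 10)
targets = torch.tensor([0, 3, 7, 9])

# iCaRL's official implementation uses a per-class binary cross-entropy,
# i.e. every class is an independent sigmoid output
onehot = F.one_hot(targets, num_classes=logits.size(1)).float()
bce_loss = F.binary_cross_entropy_with_logits(logits, onehot)

# the toolbox instead uses the standard softmax cross-entropy
ce_loss = F.cross_entropy(logits, targets)

print(bce_loss.item(), ce_loss.item())
```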

Feel free to reopen it if you have more questions.

@qsunyuan
Author

Sorry to bother you again: in your CUB200/CUB100 setting, did you use an ImageNet-pretrained ResNet18?

I ran the experiment on the CUB dataset, but my results are very poor.

I checked the linked paper and found that they fine-tune a pretrained model for class-incremental learning.

@zhoudw-zdw
Collaborator

Yes, pretraining is needed for CUB.

@qsunyuan
Author

qsunyuan commented Apr 5, 2022

Hi,

I achieved similar results to your COIL on CUB200 in the first 4 tasks, according to Figure 4(h). (I used the fixed total memory setting, as most class-incremental methods use this protocol.)

In your paper:

Correspondingly, we also conduct the experiment on CUB-100/200 with rare exemplars, i.e., we only save three exemplars per class.

Fixed Total Memory Setting:

Does it mean that I save 600 (200 * 3) samples in total, and that as the tasks go on the number of samples per class decreases? For example, 30 (600/20) exemplars per class in the 1st task, 15 (600/40) exemplars per class in the 2nd task, and finally 3 images per class?

Fixed Images per Class Memory Setting:

Or is the memory just fixed at 3 images per class from the beginning?
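To make the first option concrete, this is the schedule I have in mind (a quick sketch assuming 20 new classes per task on CUB-200 and a 600-exemplar total budget):

```python
# Fixed total memory: one 600-exemplar budget shared by all classes seen so far,
# assuming CUB-200 is learned in 10 tasks of 20 classes each (my assumption).
TOTAL_BUDGET = 200 * 3  # 600 exemplars

for task in range(1, 11):
    seen_classes = 20 * task
    per_class = TOTAL_BUDGET // seen_classes
    print(f"task {task:2d}: {seen_classes:3d} classes -> {per_class} exemplars/class")
# task  1:  20 classes -> 30 exemplars/class
# task  2:  40 classes -> 15 exemplars/class
# ...
# task 10: 200 classes ->  3 exemplars/class
```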

What exactly is your replay memory protocol (fixed total memory?)

Thanks in advance.

@zhoudw-zdw
Collaborator

Hi, maybe you should read iCaRL [1] first, where you can get the basic idea of exemplars. Our implementation is based on it; see here.

[1] iCaRL: Incremental Classifier and Representation Learning
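In short, iCaRL's exemplar management is herding: for each class, greedily pick samples so that the mean of the selected features stays close to the class mean. A minimal sketch of that idea (not the code in this repository):

```python
import numpy as np

def herding_selection(features, m):
    """Greedily pick m samples of one class whose feature mean best
    approximates the true class mean (iCaRL-style herding)."""
    # L2-normalise the features, as iCaRL does
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    class_mean = feats.mean(axis=0)

    selected, running_sum = [], np.zeros_like(class_mean)
    for k in range(1, m + 1):
        # mean of the selected set if sample i were added next
        candidate_means = (running_sum + feats) / k
        dists = np.linalg.norm(class_mean - candidate_means, axis=1)
        dists[selected] = np.inf          # don't pick the same sample twice
        i = int(np.argmin(dists))
        selected.append(i)
        running_sum += feats[i]
    return selected

# toy usage: 100 samples with 16-d features, keep 3 exemplars
idx = herding_selection(np.random.randn(100, 16), m=3)
print(idx)
```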

@qsunyuan
Author

qsunyuan commented Apr 5, 2022

Thanks for your quick reply; it seems my question was unclear.

Sorry about that.

I just want to confirm your CUB200 experiment settings.

#2 (comment)

memory_size or memory_per_class? (600 in total or 3 per class)

Following up on the link you provided, it should be "memory_size" (as in the CIFAR100 settings):

https://github.com/zhoudw-zdw/MM21-Coil/blob/f4ebcc15cb21126c1367d4481d25e8ed0689e20f/models/COIL.py#L102

@zhoudw-zdw
Collaborator

It should be the former one (memory_size, i.e., 600 in total).

@qsunyuan
Author

qsunyuan commented Apr 6, 2022

Thank you for your patient explanation.

Have a good day!
