Hi, sorry for the late reply! Many thanks for pointing out this issue, and good catch! When using L2P with replay, it is better to unfreeze the full model (unlike pure L2P without replay, where parts of the model are frozen). My intuition is that the buffered samples can better refine the feature-extraction part of the model for the downstream task. We will make this clearer to avoid future confusion.
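The freeze/unfreeze switch described in the reply above can be sketched in plain PyTorch. This is only an illustration of the idea, not the repository's actual code; the model class and parameter names are hypothetical stand-ins:

```python
import torch.nn as nn

# Hypothetical stand-in for the L2P model: a pretrained backbone plus a head.
class TinyL2P(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(32, 16)   # stand-in feature extractor
        self.head = nn.Linear(16, 10)       # stand-in classifier head

def set_trainable(model, use_replay):
    """Pure L2P: freeze the backbone and train only the head (and prompts).
    With a replay buffer: unfreeze the full model, as the reply suggests."""
    for name, p in model.named_parameters():
        p.requires_grad = use_replay or name.startswith("head")

model = TinyL2P()
set_trainable(model, use_replay=True)   # replay run: everything trains
```

The same helper with `use_replay=False` recovers the pure-L2P setting where only the head (and, in the real model, the prompt pool) receives gradients.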
Hello, were you able to reproduce the results?
What is the review trick? I want to implement buffer replay in PyTorch but can't find any details in the paper or anywhere else.
How will the class masking work with buffer replay?
Hi, thanks for the great work!
With the script you provided, I successfully reproduced the L2P result (on the Split CIFAR-100 dataset) without replay.
I am now trying to reproduce the results with replay, but even using a replay buffer storing 50 samples/class, I get much lower results (acc 80.10/forgetting 9.13) compared to the reported ones (acc 86.31/forgetting 5.83).
It doesn't make sense that using replay leads to higher forgetting, so I guess I am missing something.
Since there are no examples of how to use replay in the given configuration file (i.e., cifar100_l2p.py), I added some lines to handle replay, as below.
I guess `review_trick` is for fine-tuning the model with a balanced dataset. Strangely, when I set `review_trick=True`, I get a much lower result (in particular, very low learning accuracy). And when I set `review_trick=False`, the model keeps updating with replay samples, but it still shows much lower accuracy (acc 80.10/forgetting 9.13). Do you have any advice on what I am missing or where to modify your code?
Or can you share the correct configuration files to reproduce the result of Table 1 using replay buffer?
Thank you.
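For reference, one common reading of a "review trick" in replay-based continual learning is a short fine-tuning pass over a class-balanced buffer after each task. Below is a minimal sketch of such a buffer under that assumption; the class name, eviction policy, and sampling scheme are all made up for illustration and are not taken from this repository:

```python
import random
from collections import defaultdict

# Hypothetical class-balanced replay buffer: keeps at most `per_class`
# samples for each class seen so far, so replay batches stay balanced.
class BalancedBuffer:
    def __init__(self, per_class=50):
        self.per_class = per_class
        self.store = defaultdict(list)      # class label -> stored samples

    def add(self, sample, label):
        slot = self.store[label]
        slot.append(sample)
        if len(slot) > self.per_class:      # FIFO eviction within the class
            slot.pop(0)

    def sample(self, batch_size):
        # draw class labels uniformly, then one stored sample per draw
        labels = list(self.store)
        return [(random.choice(self.store[c]), c)
                for c in random.choices(labels, k=batch_size)]

buf = BalancedBuffer(per_class=50)
# after training each task: add its data with buf.add(x, y), then run a
# few epochs of fine-tuning on batches from buf.sample(batch_size)
```

Under this reading, `review_trick=True` would add that balanced fine-tuning pass after each task, while `review_trick=False` would interleave buffer samples into ordinary training batches instead.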