Thank you for your excellent work and in-depth analysis of the iterative training paradigm. I am currently having difficulty reproducing the results reported in the paper. Running the model with the LLF arguments mentioned in the repo (`reset_layer_name` set to `layer4`), I get a final test accuracy of about 71.37% (N10) on the CUB dataset with label smoothing (0.1) at the end of 10 generations. This is about 1% below the 72.47% (N10) reported in Table 1. The accuracy I get at the end of the 3rd generation (N3) is 68.07%, which is 2.7% below the reported 70.76% (Table 1). Did you reset from block 3, or only block 4, in your LLF experiments? Any help regarding this would be appreciated. Also, Table 1 states that LLF uses L={10,14}, corresponding to blocks 3 and 4 in ResNet18. Does that mean the Table 1 results come from forgetting the layers in block 3, block 4, and the FC layer? Please provide some clarity on that.
Thank you!
The results of my reproducibility experiments are shown below for each generation on the CUB dataset with label smoothing (0.1).
| gen | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| last_tst_acc1 | 59.37 | 65.21 | 68.07 | 68.55 | 70.06 | 70.00 | 70.54 | 70.52 | 70.30 | 70.09 | 71.37 |
Hi, sorry for the late reply! I believe CUB uses `reset_layer_name` as `layer3`, with a learning rate of 0.1. Also note that the 3rd generation corresponds to 4 rounds of training (the initial training plus 3 generations of reset). You may be doing this already, but I wanted to note it just in case.
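To make the layer-naming question concrete, here is a minimal, hypothetical sketch of the reset semantics implied by the reply above: forgetting from a named layer onward re-initializes that layer and everything after it, including the FC head. The layer names and the `layers_to_reset` helper are illustrative only, not the actual identifiers used in the LLF codebase.

```python
# Illustrative layer ordering of a ResNet18-style model
# (conv stem, four residual blocks, classification head).
# These names mirror the common torchvision convention but are
# an assumption, not taken from the LLF repo.
LAYERS = ["conv1", "layer1", "layer2", "layer3", "layer4", "fc"]

def layers_to_reset(reset_layer_name):
    """Return every layer from `reset_layer_name` onward.

    Under this reading, resetting from layer3 re-initializes
    blocks 3 and 4 *and* the FC head, while resetting from
    layer4 re-initializes only block 4 and the FC head.
    """
    idx = LAYERS.index(reset_layer_name)
    return LAYERS[idx:]

def total_training_rounds(n_generations):
    """Generation N means the initial training plus N resets,
    i.e. N + 1 rounds of training in total."""
    return n_generations + 1
```

For example, `layers_to_reset("layer3")` yields `["layer3", "layer4", "fc"]`, which would match the Table 1 setting L={10,14} covering blocks 3 and 4 plus the classifier, and `total_training_rounds(3)` is 4, matching the generation counting described above.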