I have a question about your implementation of LwF: there is a large gap between the LwF numbers reported in your paper and those in previous works (iCaRL and End-to-End Incremental Learning).
I also implemented the LwF.MC variant introduced by iCaRL, and my results on CIFAR-100 are consistent with the numbers reported by iCaRL. Other implementations differ as well: PODNet, for instance, uses BCELoss, while I use CrossEntropyLoss instead.
Having read your code, I suspect the problem lies in the loss used for LwF.MC.
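To make the comparison concrete, here is a minimal NumPy sketch (not the author's code; function names and the temperature value are illustrative assumptions) of the two distillation losses being contrasted above: the per-class sigmoid/BCE formulation that iCaRL uses for LwF.MC, and a temperature-scaled softmax cross-entropy formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def lwf_mc_bce_distill(logits, old_logits):
    """iCaRL-style LwF.MC distillation: per-class binary cross-entropy
    between sigmoid outputs of the new model and soft targets produced
    by the old model (applied over the old classes)."""
    eps = 1e-12
    q = sigmoid(old_logits)  # soft targets from the frozen old model
    p = sigmoid(logits)      # new model's per-class probabilities
    return -np.mean(q * np.log(p + eps) + (1 - q) * np.log(1 - p + eps))

def ce_distill(logits, old_logits, T=2.0):
    """Softmax cross-entropy distillation (original LwF / Hinton-style KD)
    with temperature T; the two models' outputs compete across classes
    instead of being treated as independent binary problems."""
    eps = 1e-12
    q = softmax(old_logits / T)
    log_p = np.log(softmax(logits / T) + eps)
    return -np.mean((q * log_p).sum(axis=-1))
```

The practical difference is that the BCE form treats every old class as an independent binary decision, while the cross-entropy form normalizes over all classes, so the two can give noticeably different incremental-learning results even with otherwise identical training.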
Sorry, I wrongly assumed the numbers reported in the paper belonged to LwF.MC. In addition, I found that the ft classification is actually performed on all 100 classes; after correcting the code, the final performance drops from 45% to 42% (10000 inversion iterations).
Nice work!