Hi! Thanks for sharing! This attack is cool!

Have you tested whether the attack is still effective after fine-tuning the backdoored model on 5% of clean data?

I fine-tuned the backdoored model (using the pretrained model you provided) on 5% of the clean training data for 10 epochs with the SGD optimizer. Judging from the results, this strategy defends against WaNet. Have you tested it?

You are indeed right. Catastrophic forgetting is the major limitation of our paper, as it is for many other backdoor attacks. In fact, that is exactly the problem we are working on right now.
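The fine-tuning experiment described above (5% clean data, 10 epochs, SGD) could be sketched roughly as follows. This is a minimal illustration, not the thread authors' actual script: the model, dataset, batch size, learning rate, and momentum here are all illustrative stand-ins — in practice one would load the provided pretrained WaNet model and the real clean training set.

```python
# Hypothetical sketch of the fine-tuning defense described above:
# fine-tune a (backdoored) classifier on a small clean subset with SGD.
# All concrete values below are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, Subset

torch.manual_seed(0)

# Stand-in for the pretrained backdoored classifier
# (in practice, load the provided WaNet checkpoint here).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# Stand-in clean training data (in practice, e.g. CIFAR-10).
images = torch.randn(200, 3, 32, 32)
labels = torch.randint(0, 10, (200,))
clean_set = TensorDataset(images, labels)

# Keep only 5% of the clean data, as in the experiment above.
n_subset = int(0.05 * len(clean_set))
subset = Subset(clean_set, list(range(n_subset)))
loader = DataLoader(subset, batch_size=8, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):  # 10 epochs, per the experiment above
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# After fine-tuning, one would re-evaluate the attack success rate on
# trigger-stamped inputs to measure how much of the backdoor was erased.
```

The catastrophic-forgetting effect mentioned in the reply is exactly what this procedure exploits: updating all weights on clean data alone tends to overwrite the spurious trigger-to-target mapping.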