Thank you for your excellent work. I found that the hyperparameters --lr_finetuning_init and --lr_unlearning_init seem not to be used anywhere in ./defense/abl/abl.py. Is that a bug?
This is intentional rather than a bug. In our experiments, the unlearning step was highly sensitive to the learning rate, so we kept the learning rates from the original paper hard-coded instead of reading them from these hyperparameters. To avoid confusion, we will expose them properly in a later version.
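For illustration, here is a minimal sketch of how the two flags could be consumed instead of hard-coding the learning rates. The defaults, stage names, and helper functions below are assumptions for the example, not the repository's actual code:

```python
import argparse

def build_parser():
    # Expose the two learning-rate hyperparameters mentioned in the issue.
    # Default values here are placeholders, not the paper's settings.
    parser = argparse.ArgumentParser(description="ABL defense (sketch)")
    parser.add_argument("--lr_finetuning_init", type=float, default=0.1,
                        help="initial learning rate for the fine-tuning stage")
    parser.add_argument("--lr_unlearning_init", type=float, default=5e-4,
                        help="initial learning rate for the unlearning stage")
    return parser

def select_lr(args, stage):
    # Route each training stage to its CLI-provided learning rate
    # rather than a constant baked into the script.
    if stage == "finetuning":
        return args.lr_finetuning_init
    if stage == "unlearning":
        return args.lr_unlearning_init
    raise ValueError(f"unknown stage: {stage}")

if __name__ == "__main__":
    args = build_parser().parse_args(["--lr_unlearning_init", "1e-4"])
    print(select_lr(args, "unlearning"))  # prints 0.0001
```

With this wiring, the sensitivity concern could still be addressed by keeping the paper's values as the defaults, so users only override them deliberately.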