
Hyperparameters --lr_finetuning_init and --lr_unlearning_init seem to be unused in ./defense/abl/abl.py #12

Closed
mo666666 opened this issue Mar 31, 2023 · 2 comments

Comments

@mo666666

Thank you for your excellent work. I noticed that the hyperparameters --lr_finetuning_init and --lr_unlearning_init seem to be unused in ./defense/abl/abl.py. Is this a bug?
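For illustration, here is a minimal sketch of the pattern I mean (hypothetical code, not the actual abl.py; the function names and default values are made up):

```python
# Hypothetical sketch of the reported pattern: the flags are registered
# on the parser, but the training stages never read them.
import argparse

def add_args(parser: argparse.ArgumentParser) -> argparse.ArgumentParser:
    # Both flags parse fine from the command line...
    parser.add_argument('--lr_finetuning_init', type=float, default=0.1,
                        help='initial learning rate for the fine-tuning stage')
    parser.add_argument('--lr_unlearning_init', type=float, default=5e-4,
                        help='initial learning rate for the unlearning stage')
    return parser

def learning_rate_unlearning(optimizer):
    # ...but the stage sets a literal value, so the flags have no effect.
    for param_group in optimizer.param_groups:
        param_group['lr'] = 5e-4
```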

@mdzhangst
Collaborator

Yes. In our experiments we found that the unlearning stage is highly sensitive to the learning rate, so we kept the learning rate given in the original paper hardcoded instead of reading it from these hyperparameters. To avoid confusion, we will change this in a later version.
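A minimal sketch of the change being described, assuming an `args` namespace produced by argparse (the function name and decay schedule are illustrative, not the actual implementation):

```python
def learning_rate_unlearning(optimizer, epoch, args):
    # Read the initial value from the CLI flag instead of a hardcoded
    # constant; the step decay here is only a placeholder schedule.
    lr = args.lr_unlearning_init * (0.1 ** (epoch // 5))
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr
```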

@mo666666
Author

OK, thank you for your reply!
