Several questions about this article #10
Comments
Hi, A1: Yes, it shares similarities with data poisoning. Best
Thank you so much! It is really helpful.
Yes, that is the idea.
Thanks again.
Hi, I'm new to studying adversarial examples, and I'd like to ask you some questions.
Q1: Is your scheme based on data poisoning?
Q2: About formula (2), the paper says: "Note that the above bi-level optimization has two components that optimize the same objective. In order to find effective noise δ and unlearnable examples, the optimization steps for θ should be limited, compared to standard or adversarial training. Specifically, we optimize δ over Dc after every M steps of optimization of θ."
Why does optimizing δ over Dc after every M steps of optimizing θ help find effective noise δ? Does this strategy only work when the two minimizations share the same objective?
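For concreteness, the alternating schedule described in the quoted passage can be sketched on a toy problem. This is only an illustrative sketch, not the paper's implementation: it uses a hypothetical 1-D linear-regression "model" in place of a neural network, and the step sizes, budget `eps`, and schedule constants are made-up values. The structure it shows is the min-min alternation: `M` gradient steps on the parameters θ, then a few gradient steps on the sample-wise noise δ, both descending the same training loss, with δ projected to a small budget.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy clean dataset D_c: 1-D linear regression standing in for image data.
x = rng.normal(size=(64, 1))
y = 3.0 * x[:, 0] + rng.normal(scale=0.1, size=64)

w = np.zeros(1)            # model parameters theta
delta = np.zeros_like(x)   # sample-wise error-minimizing noise
M = 10                     # theta steps between each delta update
eps = 0.1                  # noise budget: ||delta||_inf <= eps
lr_w, lr_d = 0.05, 0.5     # illustrative step sizes

def loss_and_grads(w, delta):
    """Same objective for both players: MSE on the perturbed data."""
    err = (x + delta) @ w - y
    loss = np.mean(err ** 2)
    g_w = 2.0 * (x + delta).T @ err / len(y)              # d loss / d theta
    g_d = 2.0 * err[:, None] * w[None, :] / len(y)        # d loss / d delta
    return loss, g_w, g_d

for step in range(1, 201):
    _, g_w, _ = loss_and_grads(w, delta)
    w -= lr_w * g_w                                       # inner min over theta
    if step % M == 0:                                     # every M steps: outer min over delta
        for _ in range(5):
            _, _, g_d = loss_and_grads(w, delta)
            delta = np.clip(delta - lr_d * g_d, -eps, eps)  # project to budget

final_loss = loss_and_grads(w, delta)[0]
clean_loss = np.mean((x @ w - y) ** 2)
print(final_loss, clean_loss)
```

The point of limiting θ to M steps is that δ is updated against a model that is not yet fully trained; descending the same objective in δ then erases the remaining error signal, so the perturbed samples look "already learned" and contribute little gradient.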
Q3: In Section 4.1, it is said: "However, in the sample-wise case, every sample has a different noise, and there is no explicit correlation between the noise and the label. In this case, only low-error samples can be ignored by the model, and normal and high-error examples have more positive impact on model learning than low-error examples. This makes error-minimizing noise more generic and effective in making data unlearnable."
I understand that there is no explicit correlation between the noise and the label in the sample-wise case. But why does this make error-minimizing noise more generic and effective in making data unlearnable? What does it mean?
Looking forward to your reply! Thanks!