Several questions about this article #10

Closed
yyyliaQ opened this issue Mar 24, 2022 · 4 comments

Comments

yyyliaQ commented Mar 24, 2022

Hi, I'm new to studying adversarial examples, and I'd like to ask you a few questions.
Q1: Is your scheme based on data poisoning?

Q2: About formula (2), it is said: "Note that the above bi-level optimization has two components that optimize the same objective. In order to find effective noise δ and unlearnable examples, the optimization steps for θ should be limited, compared to standard or adversarial training. Specifically, we optimize δ over Dc after every M steps of optimization of θ."
Why does optimizing δ over Dc after every M steps of optimizing θ help find effective noise δ? Does this strategy only work when the two min terms share the same objective?

Q3: In Section 4.1, it is said: "However, in the sample-wise case, every sample has a different noise, and there is no explicit correlation between the noise and the label. In this case, only low-error samples can be ignored by the model, and normal and high-error examples have more positive impact on model learning than low-error examples. This makes error-minimizing noise more generic and effective in making data unlearnable."
I understand that there is no explicit correlation between the noise and the label in the sample-wise case. But why does this make error-minimizing noise more generic and effective at making data unlearnable? What does that mean?

Looking forward to your reply! Thanks!

HanxunH (Owner) commented Mar 27, 2022

Hi,
Thanks for your interest in our work.

A1: Yes, it shares similarities with data poisoning.
A2: Because both terms are optimized towards the same objective. If the model's optimized weights can already minimize the objective, there is no need for the unlearnable noise to minimize it, so δ gets no useful signal. Limiting θ to M steps keeps that pressure on δ (rough sketch below). This has also been discussed in another issue.
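For intuition, here is a rough, self-contained sketch of the alternating loop (the toy data, toy model, M, and step sizes are all placeholders I made up for illustration; this is not the code in this repo):

```python
# Alternating min-min scheme, roughly as described around Eq. (2):
# M steps on the model weights theta, then error-minimizing steps on delta.
# Everything below (data, model, M, step sizes) is a toy stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.rand(64, 3, 32, 32)               # stand-in for the clean set Dc
y = torch.randint(0, 10, (64,))
delta = torch.zeros_like(x)                 # sample-wise noise, one per example
eps = 8 / 255                               # L_inf budget on the noise

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for outer in range(5):
    # --- limited optimization of theta: only M steps on perturbed data ---
    for _ in range(10):                     # M = 10
        opt.zero_grad()
        F.cross_entropy(model(x + delta), y).backward()
        opt.step()
    # --- then minimize the SAME loss w.r.t. delta (PGD-style descent) ---
    for _ in range(20):
        d = delta.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x + d), y)
        grad, = torch.autograd.grad(loss, d)
        # sign *descent*, unlike adversarial examples: error-minimizing
        delta = (delta - (1 / 255) * grad.sign()).clamp(-eps, eps).detach()
```

If θ were instead trained to convergence inside the loop, the loss would already be near zero before δ gets its turn, and the gradient on δ would vanish.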
A3: Compared to class-wise noise, sample-wise noise is hard to detect. Class-wise noise is easier to detect because all samples of a class share the same noise, so averaging those samples can expose it (toy check below). From a practical perspective, sample-wise noise is more effective.
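And a toy check of the averaging point (random stand-in data, nothing from the paper):

```python
# Image content averages out across many samples of a class, but a shared
# class-wise noise does not, so the average exposes it. Sample-wise noise
# averages toward zero instead. All data here is random and illustrative.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n = 1000
images = torch.rand(n, 3, 32, 32)                        # one class's samples
class_noise = (8 / 255) * torch.randn(3, 32, 32).sign()  # shared across the class
sample_noise = (8 / 255) * torch.randn(n, 3, 32, 32).sign()

avg_cw = (images + class_noise).mean(0) - 0.5   # 0.5 = expected uniform-pixel mean
avg_sw = (images + sample_noise).mean(0) - 0.5

cos = F.cosine_similarity(avg_cw.flatten(), class_noise.flatten(), dim=0)
print(cos.item())                                  # close to 1: noise exposed
print(avg_cw.norm().item(), avg_sw.norm().item())  # sample-wise residual is much smaller
```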

Best

yyyliaQ (Author) commented Mar 27, 2022

Thank you so much! It is really helpful.
The goal of error-minimizing noise is to reduce the error of the training examples to close to zero. Since the training loss is close to zero, the modified samples no longer contribute to updating the model's parameters, and the model behaves as if it had already been trained well.
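To convince myself, I ran a tiny check (my own toy example, not from the paper):

```python
# When the cross-entropy loss on a sample is ~0, the gradient that sample
# sends back is ~0 too, so it barely moves the parameters.
import torch
import torch.nn.functional as F

logits = torch.tensor([[10.0, 0.0, 0.0]], requires_grad=True)  # confident, correct
loss = F.cross_entropy(logits, torch.tensor([0]))
loss.backward()
print(loss.item())                     # ~1e-4: near-zero loss
print(logits.grad.abs().max().item())  # ~1e-4: near-zero gradient
```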
Did I get that right?

HanxunH (Owner) commented Mar 27, 2022

Yes, that is the idea.

yyyliaQ (Author) commented Mar 28, 2022

Thanks again.

HanxunH closed this as completed Sep 16, 2022