
L2 distance between adversarial example and the original input data #4

kkew3 commented Jul 18, 2018

In attacks.AttackCarliniWagnerL2._optimize there's:

if input_orig is None:
    dist = l2_dist(input_adv, input_var, keepdim=False)

The problem is that input_var has already been mapped to tanh-space, so it is not in fact the original input, while the adversarial example input_adv is the one being compared against it. Without mapping input_var back to the original input space, the computed dist is not the true L2 distance between the adversarial example and the original image. Carlini's reference implementation performs exactly this back-mapping before measuring the distance. Thanks for checking!
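
For illustration, here is a minimal sketch of the suggested fix. The tanh_rescale helper, its box bounds, and the l2_dist body below are assumptions made to keep the snippet self-contained and may not match this repo's actual helpers; the point is only that input_var has to be mapped back out of tanh-space before the distance is taken (Carlini's TensorFlow code compares the adversarial image against tanh(timg)/2 for the same reason).

    import torch

    def tanh_rescale(x, box_min=0.0, box_max=1.0):
        # Assumed mapping back from tanh-space: recover an image in
        # [box_min, box_max] from its tanh-space representation x
        return 0.5 * (torch.tanh(x) + 1.0) * (box_max - box_min) + box_min

    def l2_dist(x, y, keepdim=False):
        # Per-example squared L2 distance, as used in the C&W objective
        # (assumes an NCHW batch when keepdim=True)
        d = ((x - y) ** 2).view(x.size(0), -1).sum(dim=1)
        return d.view(-1, 1, 1, 1) if keepdim else d

    # Inside _optimize: input_var is already in tanh-space, so map it
    # back to image space before comparing with the adversarial example
    if input_orig is None:
        dist = l2_dist(input_adv, tanh_rescale(input_var), keepdim=False)
    else:
        dist = l2_dist(input_adv, input_orig, keepdim=False)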
