The generated adv example is the same as the original images #5
Comments
I found the same problem. My solution is to pass a copy of the input (input.clone()) when calling the run function.
It's actually quite clear: if you keep printing input_adv and input_var inside the optimizer, you can see that input_var is also updated on every step. I haven't dug into the root cause yet, but for now it looks like when input_var is used directly, PyTorch's internals leave some gradient-related machinery active that should have been turned off. I tried cloning input_var wherever it is used, and after that, input no longer changed.
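The failure mode described above can be reduced to a tensor-aliasing pitfall. A minimal sketch (illustrative variable names, not the repo's actual code): if the adversarial tensor shares storage with the original input, in-place updates modify both, so their distance stays 0 exactly as observed; cloning first fixes it.

```python
import torch

# Buggy: input_adv is an alias of the original input,
# so the in-place update changes both tensors.
original = torch.zeros(3)
input_adv = original
input_adv += 0.5
print(torch.dist(original, input_adv).item())  # 0.0 -- original changed too

# Fixed: operate on a copy, as suggested in the comment above.
original = torch.zeros(3)
input_adv = original.clone()
input_adv += 0.5
print(torch.dist(original, input_adv).item())  # nonzero -- original preserved
```

With the clone, the perturbation accumulates only in input_adv, so dist is no longer trivially 0 and the L2 term of the CW loss is meaningful again.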
So the original images in input also get updated??
Yes. In the end, input_var and input_adv become numerically identical tensors, so dist is naturally 0 and loss2 is 0. Meanwhile, input_adv certainly makes the model misclassify, so loss1 is also 0. That is how everything ends up at 0. Think about what could make the loss 0, reason through it step by step, print the outputs, and you will find the problem.
So does the provided method fail to carry out a successful attack?
Although the issue exists, the CW attack in this repo still works. It might be because of this line: https://github.com/rwightman/pytorch-nips2017-attack-example/blob/master/attacks/attack_iterative.py#L78
I have tried the attack, but there seems to be no difference between the adv example and the original image, so the attack may have failed. The run_attack_cwl2.py file does not produce a working attack.