[PyTorch] Changing layers from torch.nn.functional to torch.nn affects attack accuracy. #1428
Replies: 1 comment 17 replies
-
Hi @KarthikGanesan88 That's an interesting observation, but I don't think it is caused by your new model or by ART. Running the original script multiple times usually results in adversarial accuracies around 35%, but sometimes in values as low as 16% or as high as 50%. A possible reason is that the model gets initialised and trained slightly differently in each run. It could also be that MNIST is rather sensitive to small differences; maybe we should repeat the experiments with CIFAR10.
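One way to check whether the spread comes from initialisation and training randomness rather than from the layer change is to fix the random seeds before each run. A minimal sketch (the seed value is arbitrary, and this is not part of the original script):

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 0) -> None:
    """Fix the relevant RNGs so repeated runs start from identical weights."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    # Make cuDNN deterministic as well (slower, but reproducible).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


# With identical seeds, two freshly built layers get identical weights.
set_seed(0)
a = torch.nn.Linear(4, 2)
set_seed(0)
b = torch.nn.Linear(4, 2)
assert torch.equal(a.weight, b.weight)
```

If the adversarial accuracy still differs between the two model variants under fixed seeds, the gap is more likely real than run-to-run noise.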
-
Hi, I am using PyTorch with ART, and if I swap the layers in my model from the `torch.nn.functional` versions to the `torch.nn` versions, I see big differences in attack accuracy. For example, when I run the `get_started_pytorch.py` file as is, I get:

However, if I change the network to be this instead:

My results change to:

As far as I am aware, both of these networks should work exactly the same way, but PyTorch recommends using the `torch.nn` versions instead of calling the `torch.nn.functional` functions directly. Is there something in ART that requires `relu` and `max_pool2d` to be called as functions during `forward` rather than defined as layers in the model?

I also tried training both of these nets with PyTorch directly and got the same classification accuracy of 98% each time. I am only showing the attack success change for FGSM, but I see similar discrepancies for PGD, HopSkipJump, CarliniL2, and SaliencyMap.
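For reference, the two styles being compared look roughly like this (a sketch, not the exact network from `get_started_pytorch.py`): the functional style calls `F.relu`/`F.max_pool2d` inside `forward`, while the module style registers `nn.ReLU`/`nn.MaxPool2d` as layers. Since ReLU and max-pooling have no parameters, the two networks compute exactly the same function once their weights match:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FunctionalNet(nn.Module):
    """ReLU and pooling applied via torch.nn.functional calls in forward."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=5)
        self.fc = nn.Linear(4 * 12 * 12, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv(x)), 2)
        return self.fc(x.flatten(1))


class ModuleNet(nn.Module):
    """The same computation with ReLU and pooling registered as layers."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=5)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(2)
        self.fc = nn.Linear(4 * 12 * 12, 10)

    def forward(self, x):
        x = self.pool(self.relu(self.conv(x)))
        return self.fc(x.flatten(1))


# Copy the parameterised layers from one net to the other; the outputs
# then match exactly, since ReLU and max-pooling carry no parameters.
f_net, m_net = FunctionalNet(), ModuleNet()
m_net.conv.load_state_dict(f_net.conv.state_dict())
m_net.fc.load_state_dict(f_net.fc.state_dict())
x = torch.randn(2, 1, 28, 28)  # MNIST-shaped dummy batch
assert torch.equal(f_net(x), m_net(x))
```

Because the forward passes are identical, any accuracy gap between the two variants under attack should come from training randomness rather than from the model definition itself.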