apply MIFace attack on different datasets #2057
-
Replies: 11 comments
-
Hi @UnReAlKiNg, thank you for using ART! Would you be able to share an example script that reproduces the images above?
-
mi_attack1.ipynb -> MNIST. For MNIST, I trained the model (which has the same structure as in the quick start) separately, and the test accuracy is 98%.
-
@UnReAlKiNg What is the output of the model for cifar10: logits or softmax? I think …
-
Thanks for your answer, but I noticed that in your quick start the example network is defined like this:

Step 0: Define the neural network model, return logits instead of activation in the forward method

class Net(nn.Module): …
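For reference, here is a minimal sketch of a small PyTorch CNN in the spirit of that quick-start model: `forward` returns raw logits with no softmax at the end. The layer sizes here are illustrative assumptions for 28x28 MNIST inputs, not the exact quick-start code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 4, kernel_size=5)   # 28x28 -> 24x24
        self.conv2 = nn.Conv2d(4, 10, kernel_size=5)  # 12x12 -> 8x8
        self.fc1 = nn.Linear(10 * 4 * 4, 100)
        self.fc2 = nn.Linear(100, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)  # raw logits, no softmax

model = Net()
logits = model(torch.zeros(2, 1, 28, 28))
print(logits.shape)  # torch.Size([2, 10])
```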
-
@UnReAlKiNg Yes, please try adding a softmax layer. It's true, the example works with a model that outputs logits. Whether to choose logits or softmax depends on the attack algorithm and/or implementation. Some attacks work efficiently only on logits, whereas others expect softmax probabilities.
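If the attack expects probabilities, one common way to do this is to append a softmax head to the already-trained logits model. A minimal sketch (the `nn.Linear` here is just a stand-in for your trained network):

```python
import torch
import torch.nn as nn

# Stand-in for an already-trained model that outputs raw logits.
logits_model = nn.Linear(8, 10)

# Append a softmax layer so the combined model outputs probabilities.
prob_model = nn.Sequential(logits_model, nn.Softmax(dim=1))

x = torch.randn(4, 8)
probs = prob_model(x)
print(probs.sum(dim=1))  # each row sums to 1
```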
-
OK, but here is another question. The loss function is CrossEntropy, which internally applies softmax itself. If I add a softmax layer after the model, could that cause convergence problems? Or is there something else I should change?
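One way to sidestep that concern entirely: keep training the raw-logits model with `CrossEntropyLoss` (which applies log-softmax internally, so no softmax belongs in the model during training), and attach the softmax head only afterwards, when handing the model to the attack. A hedged sketch with a toy linear model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 3)          # toy stand-in for a logits model
loss_fn = nn.CrossEntropyLoss()  # expects raw logits, applies log-softmax itself
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 4)
y = torch.randint(0, 3, (32,))

# Train on logits: no softmax anywhere in the model.
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# Only now, for an attack that expects probabilities, add the softmax head.
attack_model = nn.Sequential(model, nn.Softmax(dim=1))
```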
-
@UnReAlKiNg The answer depends on the attack. For …
-
That's great news! There is no guarantee that all models reveal the same amount of information. You could investigate the influence of the attack parameters like the learning rate, etc., or investigate different models trained on the same data but with different hyperparameters, e.g. create a model that over-fits on the training dataset.
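To get a feel for the learning-rate sensitivity, here is a minimal toy sketch of the gradient-ascent loop at the heart of model inversion (an illustrative stand-in, not ART's MIFace implementation): starting from a blank input, repeatedly step the input in the direction that raises the target-class probability.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 3))  # toy classifier
target_class = 1

def invert(learning_rate, max_iter=100):
    x = torch.zeros(1, 16, requires_grad=True)  # start from a blank input
    for _ in range(max_iter):
        prob = torch.softmax(model(x), dim=1)[0, target_class]
        (-prob).backward()                       # ascend the target probability
        with torch.no_grad():
            x -= learning_rate * x.grad
            x.grad.zero_()
    return torch.softmax(model(x), dim=1)[0, target_class].item()

# Sweep the learning rate to see how it changes the recovered confidence.
for lr in (0.01, 0.1, 1.0):
    print(lr, invert(lr))
```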
-
I'll convert this issue into a Discussion, where we can continue under the Discussions tab.