In the implementation of FGSM for MNIST, you do not clamp the initial perturbation, meaning you calculate the gradient based on out-of-bounds data points.

This was not intentional; we simply forgot to add the clamping for MNIST. You may need to adjust the alpha parameter for training MNIST if you do add the clamping.

The relevant MNIST code:
delta = torch.zeros_like(X).uniform_(-args.epsilon, args.epsilon).cuda()
delta.requires_grad = True
output = model(X + delta)
loss = F.cross_entropy(output, y)
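One way to add the missing clamping is to project the random initialization so that X + delta stays in the valid MNIST pixel range [0, 1], mirroring what the CIFAR implementation does. The sketch below is illustrative, not the repository's actual fix; `init_delta` is a hypothetical helper, the `.cuda()` call from the original snippet is omitted, and a fixed [0, 1] range is assumed.

```python
import torch

def init_delta(X, epsilon):
    """Hypothetical helper: random FGSM init clamped to the valid range.

    X is assumed to hold MNIST images with pixel values in [0, 1];
    epsilon is the L-infinity perturbation budget.
    """
    delta = torch.zeros_like(X).uniform_(-epsilon, epsilon)
    # Project so that X + delta lies in [0, 1]; since X itself is in
    # [0, 1], this can only shrink delta, never grow it past epsilon.
    delta.data = torch.clamp(X + delta, 0.0, 1.0) - X
    delta.requires_grad_(True)
    return delta
```

With this initialization, the gradient in the FGSM step is computed at an in-bounds point, at the cost of a slightly different effective perturbation distribution near the pixel-range boundaries (which is why the reply above notes alpha may need retuning).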
This contrasts with the CIFAR implementation, where the clamping is applied:
for j in range(len(epsilon)):
    delta[:, j, :, :].uniform_(-epsilon[j][0][0].item(), epsilon[j][0][0].item())
delta.data = clamp(delta, lower_limit - X, upper_limit - X)
Is this intended? Why was this choice made?