
[BUG] FAB is not working #179

Open
mmajewsk opened this issue Mar 30, 2024 · 3 comments

Labels
bug Something isn't working

Comments

@mmajewsk
✨ Short description of the bug [tl;dr]

The FAB implementation fails to run its core functionality due to a comparison bug.

💬 Detailed code and results

acc = self.get_logits(x).max(1)[1] == y

On this line, acc can end up as an all-False boolean tensor, which prevents anything useful from happening on the later line

ind_to_fool = acc.nonzero().squeeze()

because ind_to_fool is then empty, so the if block that follows never runs in any case whatsoever.
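A minimal standalone sketch of the failure mode (dummy tensors, no model): when every prediction mismatches y, acc is all-False, nonzero() returns an empty tensor, and the fooling branch is skipped silently.

```python
import torch

# stand-in for self.get_logits(x).max(1)[1] (values taken from the comment below)
preds = torch.tensor([3, 8, 8, 0, 6])
y = torch.tensor([1, 1, 1, 1, 1])       # labels that never match the predictions

acc = preds == y                        # tensor([False, False, False, False, False])
ind_to_fool = acc.nonzero().squeeze()   # empty tensor

print(ind_to_fool.numel())              # 0, so `if ind_to_fool.numel() != 0` never fires
```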

@mmajewsk mmajewsk added the bug Something isn't working label Mar 30, 2024
@rikonaka
Contributor

rikonaka commented Mar 31, 2024

Hi @mmajewsk, the demo code actually runs without any problem 😉. Can you provide a copy of the code that errors out so I can test it?

[screenshot: demo run]

torch                     2.2.0
torchaudio                2.2.0
torchdiffeq               0.2.3
torchvision               0.17.0

I just checked the FAB code (I'm not the author of FAB) and found a large number of type-mismatch errors in it. For example, acc_curr here is a torch bool tensor, yet it is compared to 0.
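A hedged illustration of why that comparison is suspicious: in recent torch versions, comparing a bool tensor to the integer 0 does not raise an error, it silently behaves like logical negation, which can hide exactly this kind of type-mismatch bug (acc_curr here is a hypothetical accuracy mask, not the library's actual tensor).

```python
import torch

acc_curr = torch.tensor([True, False, True])  # hypothetical accuracy mask

flipped = acc_curr == 0   # bool is promoted, so this is element-wise "== False"
negated = ~acc_curr       # the explicit, intended way to negate a boolean mask

print(torch.equal(flipped, negated))  # True: the comparison works, but only by accident
```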

[screenshot: the type error in the FAB code]

@mmajewsk
Author

> Hi @mmajewsk , there is actually no problem running the demo code 😉, can you provide a copy of the code that will error out so I can test it?
>
> I just checked the FAB code (I'm not the author of FAB) and found a large number of type correspondence errors in the FAB code. For example: acc_curr here is torch bool type tensor, but comparing it to 0.

I highly recommend copying and pasting the code rather than posting screenshots, since then I can copy and paste it to test it myself, which I cannot do with images.

This bug does not produce any error; it fails silently.

I see now that the reason I couldn't get the code to work is that, in my case, I was not feeding the method the actual output of running the images through the model. This API is confusing in that respect: the second input could just as well be obtained from the model itself by feeding it the images. Why does atk() require a second input then?

Other attack methods work fine even when the labels do not match.

How does this method work if the labels do not match?

def perturb(self, x, y):
    # here x is an image and y is per your example: tensor([1, 1, 1, 1, 1], device='cuda:0')
    adv = x.clone()
    with torch.no_grad():
        acc = self.get_logits(x).max(1)[1] == y
        # so by this comparison, in the first run self.get_logits(x).max(1)[1] is precisely tensor([3, 8, 8, 0, 6], device='cuda:0')
        # as the model is unchanged
        # therefore acc is tensor([False, False, False, False, False], device='cuda:0')
        startt = time.time()

        torch.random.manual_seed(self.seed)
        torch.cuda.random.manual_seed(self.seed)

        def inner_perturb(targeted):
            for counter in range(self.n_restarts):
                ind_to_fool = acc.nonzero().squeeze()
                # so then this becomes: tensor([], device='cuda:0')
                if len(ind_to_fool.shape) == 0:
                    ind_to_fool = ind_to_fool.unsqueeze(0)
                # so the branch below never runs, since ind_to_fool is empty
                if ind_to_fool.numel() != 0:
                    x_to_fool, y_to_fool = (
                        x[ind_to_fool].clone(),
                        y[ind_to_fool].clone(),

@rikonaka
Contributor

rikonaka commented Mar 31, 2024

torchattacks just needs images and labels as input; I don't quite understand what you mean by

> feeding the method with the actual output of the run of the images on the model

and

> In other attack methods, when the labels are not matching, it works fine.

The labels are the ground-truth labels from the dataset, not the predictions of the model. I have rewritten the code related to the FAB attack; although the previous code worked, it had a lot of problems.
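A hedged sketch of the distinction (dummy predictions standing in for a real model, no actual torchattacks call): FAB only attacks samples the model currently classifies correctly, i.e. where prediction equals the ground-truth label, so passing arbitrary labels that never match makes the attack a silent no-op.

```python
import torch

preds = torch.tensor([3, 8, 8, 0, 6])      # stand-in for model predictions

# Ground-truth labels on which the model is partially correct:
y_true = torch.tensor([3, 8, 1, 0, 2])
acc = preds == y_true
print(acc.nonzero().squeeze().numel())     # 3 samples selected for fooling

# Arbitrary labels that never match the predictions:
y_wrong = torch.tensor([1, 1, 1, 1, 1])
acc = preds == y_wrong
print(acc.nonzero().squeeze().numel())     # 0 -> the attack silently does nothing
```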
