
'BoundTranspose' object has no attribute 'lower' #23

Closed
Cli212 opened this issue Mar 25, 2022 · 2 comments
Comments


Cli212 commented Mar 25, 2022

Hi,

Thanks for your great work first!

I'm trying to run auto_LiRPA on an MLP-based model, but I have run into a problem. I minimized my code, and here is a snippet that reproduces the issue:

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from auto_LiRPA import BoundedModule, BoundedTensor, PerturbationLpNorm

# Define model architecture

class SetConv(nn.Module):
    def __init__(self, sample_feats, hid_units):
        super(SetConv, self).__init__()
        self.sample_mlp1 = nn.Linear(sample_feats, hid_units)
        self.sample_mlp2 = nn.Linear(hid_units, hid_units)

    def forward(self, samples, sample_mask):
        # samples has shape [batch_size x num_joins+1 x sample_feats]
        hid_sample = F.relu(self.sample_mlp1(samples))
        hid_sample = F.relu(self.sample_mlp2(hid_sample))
        hid_sample = hid_sample * sample_mask  # Mask
        hid_sample = torch.sum(hid_sample, dim=1, keepdim=False)
        sample_norm = sample_mask.sum(1, keepdim=False)
        hid_sample = hid_sample / sample_norm  # Calculate average only over non-masked parts
        return hid_sample

model = SetConv(3, 10)
samples = torch.rand((1,2,3), requires_grad=True)
sample_mask = torch.rand((1,2,1), requires_grad=True)
bounded_model = BoundedModule(model, (torch.zeros_like(samples), torch.zeros_like(sample_mask)))
bounded_model.eval()
ptb = PerturbationLpNorm(norm=np.inf, eps=0.1)
my_input = (BoundedTensor(samples, ptb), BoundedTensor(sample_mask, ptb))
outputs = bounded_model(my_input)
lb, ub = bounded_model.compute_bounds(x=(my_input,), method="CROWN")

The error is "AttributeError: 'BoundTranspose' object has no attribute 'lower'", and it is raised on the last line, when I try to compute the bounds. I have tried to debug this issue but couldn't find a fix. Could you take a look when you have a chance?
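For context on what `compute_bounds` returns: `lb` and `ub` are elementwise lower/upper bounds on the model output over all inputs in the L-infinity ball of radius `eps`. As an illustrative sketch only (this is interval bound propagation, a simpler method than the CROWN bounds computed above, with made-up weights and shapes, and it is not auto_LiRPA's implementation), bound propagation through a Linear + ReLU layer looks like this:

```python
# Sketch of interval bound propagation (IBP) through Linear -> ReLU.
# Not auto_LiRPA's algorithm; weights, shapes, and eps are arbitrary.
import numpy as np

def ibp_linear(lb, ub, W, b):
    """Propagate interval bounds through y = x @ W.T + b."""
    W_pos = np.maximum(W, 0.0)   # positive weights map lb -> lb, ub -> ub
    W_neg = np.minimum(W, 0.0)   # negative weights swap the roles
    new_lb = lb @ W_pos.T + ub @ W_neg.T + b
    new_ub = ub @ W_pos.T + lb @ W_neg.T + b
    return new_lb, new_ub

def ibp_relu(lb, ub):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 3))
eps = 0.1
lb, ub = x - eps, x + eps        # L-inf ball around the nominal input

W1, b1 = rng.standard_normal((10, 3)), rng.standard_normal(10)
lb, ub = ibp_relu(*ibp_linear(lb, ub, W1, b1))

# The true output of the nominal input must lie inside [lb, ub].
y = np.maximum(x @ W1.T + b1, 0.0)
assert np.all(lb <= y) and np.all(y <= ub)
```

CROWN produces tighter bounds than this by propagating linear relaxations instead of plain intervals, but the soundness property (`lb <= output <= ub` for every perturbed input) is the same.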

Thanks!

shizhouxing added a commit that referenced this issue Mar 25, 2022
@shizhouxing
Member

Hi @Cli212, thanks for raising this issue; it has been fixed on the latest master branch.

Also, since your input looks like sequence data (or has a similar input shape), and auto_LiRPA's default "patches" mode currently has issues with such shapes, please add the following option (bound_opts) to disable the patches mode for now:

bounded_model = BoundedModule(model, (torch.zeros_like(samples), torch.zeros_like(sample_mask)),
    bound_opts={'conv_mode': 'matrix'})

@Cli212
Author

Cli212 commented Mar 25, 2022

Great to know that! Thanks for your excellent work!
