
Bound backward error of BoundMaxPool layer #5

Closed
persistz opened this issue May 2, 2022 · 4 comments

Comments

@persistz

persistz commented May 2, 2022

I trained several adversarially trained models based on the original GitHub repos of Madry and TRADES, even using the model definitions in alpha-beta-CROWN/complete_verifier/model_defs.py.
When I use robustness_verifier.py to verify these models, the same error is raised for all of them.

The error is caused by the line `assert type(last_lA) == torch.Tensor or type(last_uA) == torch.Tensor` in auto_LiRPA/operators/convolution.py -> class BoundMaxPool -> function bound_backward.

When I checked the type of last_lA, I found it is a Patches object, not a Tensor, so it cannot pass the assert check.
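
For clarity, here is a minimal sketch of the check that fails (the assert line is quoted from the error above; the surrounding function context is my assumption, not code copied from the repo):

```python
# Sketch only: the assert is the one quoted above; the function context is assumed.
import torch

def bound_backward_entry(last_lA, last_uA):
    # In conv_mode='patches', last_lA / last_uA arrive as Patches objects
    # rather than dense torch.Tensor bound matrices, so this assert fails.
    assert type(last_lA) == torch.Tensor or type(last_uA) == torch.Tensor
```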

I then checked the demos in exp_configs, hoping to use a model and config provided by you to reproduce the error, but unfortunately it seems that none of the provided models contains a MaxPool layer. If you need me to provide a model for reproducing this problem, I will be happy to do so.
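
For illustration, a small model of the kind that triggers this path might look like the sketch below (this is not one of my actual trained models, and the CIFAR-10-style 3x32x32 input shape is an assumption):

```python
# Hypothetical minimal model with a MaxPool layer (assumes 3x32x32 inputs).
import torch.nn as nn

def cnn_with_maxpool():
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, stride=1, padding=1),  # -> 16x32x32
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2),                           # -> 16x16x16
        nn.Flatten(),
        nn.Linear(16 * 16 * 16, 10),
    )
```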

@Melcfrn

Melcfrn commented Jun 21, 2022

Hi @persistz,
I am not a developer of this repository, but I encountered the same error as you. The first element of last_lA is a Tensor, so I tried to get past the assert by using last_lA[0], but further problems appear afterwards: last_A is also a Patches object and does not have the attributes the code tries to use on it.
I believe there must be something somewhere to convert Patches, but I didn't find it.

@huanzhang12
Member

@persistz @Melcfrn Apologies for my late response. This is a known limitation of our current implementation: so far we only support one maxpooling layer, and a linear layer must be used after the maxpooling layer.

If your architecture does not meet this requirement, you can probably make it work by disabling Patches mode; if your network is small, this might work well enough. You can do this by changing the conv_mode option to matrix; see https://github.com/huanzhang12/alpha-beta-CROWN/blob/8ed285c08becec7c0ac015731287f41b1a0f4861/docs/robustness_verifier_all_params.yaml#L4
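
For example, the change would look roughly like this (a sketch; the placement of conv_mode under the general section is an assumption based on the linked robustness_verifier_all_params.yaml):

```yaml
general:
  conv_mode: matrix  # default is 'patches'; 'matrix' disables Patches mode
```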

We are working on more general support for maxpooling layers, but this is not a top priority at this point, so progress may be a bit slow. I will let you know when it is ready. Thank you.

@huanzhang12
Member

@persistz @Melcfrn The latest versions of alpha-beta-CROWN and auto_LiRPA have better maxpool support (in particular, conv_mode: patches now efficiently supports MaxPool neurons); you can try the latest version to see if it works for you.
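
For example, a quick way to exercise the new support directly through auto_LiRPA (the toy model and shapes below are assumptions for illustration; BoundedModule, BoundedTensor, and PerturbationLpNorm are auto_LiRPA's standard API):

```python
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Toy maxpool network (illustrative only, not from the repo).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2), nn.Flatten(), nn.Linear(8 * 16 * 16, 10),
)
x = torch.randn(1, 3, 32, 32)
bounded = BoundedModule(model, torch.empty_like(x),
                        bound_opts={'conv_mode': 'patches'})
x_b = BoundedTensor(x, PerturbationLpNorm(norm=float('inf'), eps=8 / 255))
lb, ub = bounded.compute_bounds(x=(x_b,), method='backward')  # CROWN backward bounds
print(lb.shape, ub.shape)
```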

@persistz
Author

Hi @huanzhang12,

Thanks for the update. I will check it and close this issue soon.
