
Pretrained models for ResNet in SO(2) and SE(3) #73

Open
olayasturias opened this issue Aug 15, 2023 · 4 comments

@olayasturias

Hello all,

I wonder if anyone has a pretrained model available for the equivariant ResNet from this example, or for the SE(3)-equivariant model in this other example, trained on a larger dataset such as ImageNet or similar.

Thanks!
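For concreteness, the kind of block those escnn examples are built from looks roughly like this (a minimal sketch; the group order C8 as a discrete approximation of SO(2) and the layer widths here are illustrative, not taken from the linked examples):

```python
import torch
from escnn import gspaces, nn as enn

# Discrete C8 rotations as an approximation of SO(2)
r2_act = gspaces.rot2dOnR2(N=8)

# RGB input (trivial representation), 16 regular-representation channels out
feat_in = enn.FieldType(r2_act, 3 * [r2_act.trivial_repr])
feat_out = enn.FieldType(r2_act, 16 * [r2_act.regular_repr])

block = enn.SequentialModule(
    enn.R2Conv(feat_in, feat_out, kernel_size=3, padding=1),
    enn.InnerBatchNorm(feat_out),
    enn.ReLU(feat_out),
)

# Wrap a plain tensor so escnn can track its transformation law
x = enn.GeometricTensor(torch.randn(1, 3, 32, 32), feat_in)
y = block(x)  # rotating x by 45 degrees rotates y accordingly
```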

@sibasmarak

sibasmarak commented Feb 14, 2024

Hi, there are existing methods showing that you don't need to retrain an equivariant version of ResNet (or of other large pretrained models) from scratch. Instead, you can "adapt" a pretrained ResNet to be equivariant to a given group with architecture-agnostic equivariance methods such as canonicalization.

Please feel free to check out Equivariant Adaptation of Large Pretrained Models (NeurIPS 2023). It shows strong results for discrete groups in the image domain and for continuous groups on point clouds and other tasks; adapting pretrained image models to continuous groups still poses a few challenges, which is work in progress.
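Roughly, the pipeline looks like this (a minimal sketch, not the paper's code: the class and module names, the discretization into `num_angles` rotations, and the plain-CNN canonicalizer are my illustrative assumptions; the actual method uses an equivariant canonicalization network and a differentiable selection over group elements):

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms.functional as TF

class CanonicalizedResNet(nn.Module):
    # Hypothetical sketch: a small network scores a discrete set of
    # rotations, the input is mapped back to a canonical orientation,
    # and a pretrained ResNet processes the canonicalized image.
    def __init__(self, num_angles: int = 8):
        super().__init__()
        self.num_angles = num_angles
        # Placeholder canonicalizer; the paper uses an *equivariant*
        # network (e.g. built with escnn) so the predicted orientation
        # transforms consistently with the input.
        self.canonicalizer = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_angles),  # logits over rotation angles
        )
        self.backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")

    def forward(self, x):
        logits = self.canonicalizer(x)
        # Undo the predicted rotation. argmax is non-differentiable;
        # the real method relies on equivariance / straight-through
        # tricks so the canonicalizer still receives gradients.
        k = logits.argmax(dim=1)
        canon = torch.stack([
            TF.rotate(img, -360.0 * int(ki) / self.num_angles)
            for img, ki in zip(x, k)
        ])
        return self.backbone(canon), logits
```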

@olayasturias
Author

Hi Siba,
Thank you for your answer. That's fascinating work!
From what I understood, instead of training an equivariant ResNet from scratch, you precede a ResNet50 with your canonicalization network and then fine-tune the ResNet while training that canonicalization module. Is that correct?
Do you have code examples of how you made that work? I'm particularly interested in the network you implemented with the escnn library.
Is it similar to the CNN in this notebook? How many layers, and in general which hyperparameters, worked well for you?

@sibasmarak

Hi, thank you for taking a look at the paper!
Yes, indeed. Note that you don't need to fine-tune per se: as we show with the Segment Anything Model, you can train only the equivariant canonicalization network to learn the identity orientation with a prior regularization. You do need a regularization loss to align the outputs of the canonicalization network with the orientation of the (pre-training) dataset.
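If it helps, that prior term can be as simple as a cross-entropy pushing the canonicalizer toward the identity element on upright training images (a sketch assuming the discrete-rotation canonicalizer above; treating `logits` over rotations with index 0 as the identity is my convention here, not necessarily the paper's):

```python
import torch
import torch.nn.functional as F

def identity_prior_loss(logits: torch.Tensor) -> torch.Tensor:
    # Encourage the canonicalization network to predict the identity
    # group element (index 0 by convention here) on in-distribution,
    # upright training images.
    identity = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, identity)
```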

We are planning to release our user-friendly library before the end of February, with examples and tutorials for people to get started with canonicalization. I will let you know once we release it. A schematic of the pipeline is shown in Figure 2 of the paper.

Yes, the canonicalization networks are similar to the one in the notebook you linked. We give some details of the hyperparameter tuning in Appendix Section B: we tune the number of layers, kernel sizes, dropout (switching off dropout generally helped), and learning rates. In any case, the canonicalization networks are very small compared to the pretrained model under consideration, which makes the approach attractive (some parameter counts are highlighted in Table 3).

@dmklee

dmklee commented Apr 4, 2024

I pretrained some equivariant ResNets on ImageNet-1k. The models and weights can be found here.

The canonicalization approach is appealing since it can be applied to any pretrained model. I haven't had a chance to compare against it yet, but I'm curious whether there is any performance gap.
