Improving the performance on in-plane rotations #59

Closed
ufukefe opened this issue Feb 5, 2021 · 3 comments

Comments

ufukefe commented Feb 5, 2021

Hi, thanks for your excellent work and for sharing it!

I have spent some time testing the SuperGlue algorithm on my dataset, and I have observed that when there is some amount of in-plane rotation, the algorithm can't find suitable matches. I think this is mostly due to the SuperPoint baseline. I am planning to retrain the SuperPoint network, adjusting homography adaptation to include extreme in-plane rotations. With your permission, I would like to ask three questions.

  1. Do you have any suggestions, other than adjusting the homography adaptation parameters, for improving performance on in-plane rotations?

  2. Do you think I also need to retrain the SuperGlue network using the new rotation-invariant SuperPoint network that I plan to train?

  3. Also, I wonder whether the SuperGlue and SuperPoint models you shared were trained end-to-end or trained separately?

sarlinpe (Contributor) commented Feb 5, 2021

The SuperPoint descriptors are invariant to rotation up to about 45°. In his ECCV 2020 paper Online Invariance Selection for Local Feature Descriptors, @rpautrat shows that there is a clear trade-off between invariance and discriminativeness: a descriptor that is trained to be rotation invariant is naturally less illumination invariant, especially at fixed network capacity.

If compute and storage are not a concern, a quick fix is to rotate the image by multiples of 90°, extract features from the 4 resulting images, match each of them against your other images, and keep the set of matches with the largest number of RANSAC inliers.

Regarding your other questions:
2. If you retrain SuperPoint from scratch, then yes, you would achieve the best performance by also retraining SuperGlue, but it might still work well without retraining.
3. The released weights correspond to models trained independently.

ufukefe (Author) commented Feb 5, 2021

Thank you very much for your answers, I will try your advice.

@kyuhyoung

I tried the advice here: https://github.com/kyuhyoung/SuperGluePretrainedNetwork?ts=2

Yu-AnChen added a commit to Yu-AnChen/dump that referenced this issue Dec 15, 2022