match #3

Closed
Horzion0314 opened this issue Jun 17, 2024 · 6 comments

@Horzion0314

Firstly, thank you for your work. I have a question: the paper introduces several methods for matching. Why were methods like LightGlue and SuperGlue not used in the experiments?

@Horzion0314
Author

I wonder if it's possible to use them.

@georg-bn
Owner

georg-bn commented Jun 19, 2024

It would be possible, but the benefits of using steerers diminish with more costly matching methods. Still, you could use the LightGlue for DeDoDe that is in kornia and steer the descriptions with our steerers from "setting A".

One could also retrain LightGlue for our descriptors from settings B and C. I think an interesting direction for future work would be a learnt matcher like LightGlue that is aware of the steerer and perhaps uses it in the intermediate layers.
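
For concreteness, here is a rough, untested sketch of the kornia route mentioned above, assuming kornia's DeDoDe and LightGlue interfaces with the DeDoDe-trained "dedodeb" weights and a fixed "setting A" steerer matrix; the checkpoint path and the output handling are placeholders, not the repository's actual API.

```python
import torch
import kornia.feature as KF

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# DeDoDe-B detector/descriptor and the DeDoDe-trained LightGlue weights shipped with kornia
dedode = KF.DeDoDe.from_pretrained(
    detector_weights="L-upright", descriptor_weights="B-upright"
).to(device).eval()
matcher = KF.LightGlue("dedodeb").to(device).eval()

# Hypothetical "setting A" steerer: a fixed (256, 256) matrix whose k-th power
# steers a description to follow a rotation of the image by k * 90 degrees.
steerer = torch.load("steerer_setting_A.pth", map_location=device)  # placeholder path


def steer(desc: torch.Tensor, k: int) -> torch.Tensor:
    """Apply the steerer k times to (B, N, 256) descriptions."""
    for _ in range(k % 4):
        desc = desc @ steerer.T
    return desc


@torch.no_grad()
def match(img0: torch.Tensor, img1: torch.Tensor, k_rot: int):
    """Match img0 against img1, steering img0's descriptions by k_rot * 90 degrees."""
    # img0, img1: (1, 3, H, W) tensors with values in [0, 1]
    kpts0, _, desc0 = dedode(img0)
    kpts1, _, desc1 = dedode(img1)
    out = matcher({
        "image0": {"keypoints": kpts0, "descriptors": steer(desc0, k_rot)},
        "image1": {"keypoints": kpts1, "descriptors": desc1},
    })
    return out["matches"][0]  # (K, 2) index pairs into kpts0 / kpts1
```

If the relative rotation is unknown, one option would be to run this for k_rot = 0, 1, 2, 3 and keep the steer that yields the most matches.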

@Horzion0314
Author

Thank you so much.

@guipotje

Hi @georg-bn, I was wondering if it would be possible to apply a steerer right before the final correlation matrix in LightGlue. Intuitively, the token embeddings before the final correlation are no different from the embedding of a local descriptor. By training a projection matrix on these final token embeddings, they should become rotation-equivariant, if I understood correctly?
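
In code, the idea could look roughly like the minimal sketch below; the function, the steerer matrix, and the single-correlation view of the matcher are hypothetical simplifications, not part of LightGlue or this repository.

```python
import torch


def steered_correlation(tokens0: torch.Tensor, tokens1: torch.Tensor,
                        steerer: torch.Tensor, k: int) -> torch.Tensor:
    """Conceptual sketch only.

    tokens0: (N, D) final token embeddings of image 0,
    tokens1: (M, D) final token embeddings of image 1,
    steerer: hypothetical learned (D, D) matrix such that the tokens of an image
             rotated by 90 degrees are approximately tokens @ steerer.T,
    k: number of 90-degree rotations relating image 0 to image 1.
    """
    for _ in range(k % 4):
        tokens0 = tokens0 @ steerer.T
    # Similarity / correlation matrix that would feed the final assignment step
    return tokens0 @ tokens1.T
```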

@georg-bn
Owner

Something like that would be very nice; I have not thought about that! Intuitively, I would guess that LightGlue gets confused in its early layers if we input rotated images, so it might destroy the descriptions in some sense. That is, the LightGlue net tries to reason about the similarity of descriptions, but this gets very tricky if the images are rotated and the descriptions are hence not similar. Perhaps something like you suggest could nonetheless be possible; it's an interesting idea for sure!

@guipotje

It makes sense that LightGlue would get confused; nevertheless, it's worth a try to see the outcome. Thanks for replying!
