
Questions about the understanding of sinkhorn operations #5

Closed

qsisi opened this issue May 23, 2022 · 1 comment

Comments


qsisi commented May 23, 2022

Hello! Thanks for open-sourcing this amazing work. I have a question about the Sinkhorn operation used in the paper.

I noticed that two versions of the Sinkhorn operation are used in the paper: one from SuperGlue and the other from RPMNet. The first is used to compute the soft assignment for coarse-level matching, and the second for fine-level matching.

The two implementations actually differ considerably. The most obvious difference is the choice of padding: one pads the score matrix with a learnable parameter, while the other uses a fixed zero scalar.
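To make the contrast concrete, here is a minimal NumPy sketch of the two padding schemes followed by log-domain Sinkhorn normalization. This is an illustrative simplification, not the repository's code: the function names are hypothetical, the dustbin value stands in for SuperGlue's learnable parameter, and the normalization uses plain uniform marginals (SuperGlue's actual formulation assigns the dustbin row/column a larger target mass).

```python
import numpy as np

def logsumexp(x, axis):
    # Numerically stable log-sum-exp along one axis.
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))

def log_sinkhorn(scores, num_iters=100):
    # Alternately normalize rows and columns in log space.
    log_alpha = scores.astype(float).copy()
    for _ in range(num_iters):
        log_alpha = log_alpha - logsumexp(log_alpha, axis=1)  # rows
        log_alpha = log_alpha - logsumexp(log_alpha, axis=0)  # columns
    return np.exp(log_alpha)

def pad_learnable(scores, alpha):
    # SuperGlue-style: append a dustbin row and column filled with a
    # scalar alpha (learnable in the real model) so that unmatched
    # points can dump their probability mass into the dustbin.
    m, n = scores.shape
    padded = np.full((m + 1, n + 1), alpha, dtype=float)
    padded[:m, :n] = scores
    return padded

def pad_zero(scores):
    # RPMNet-style: the padding value is a fixed zero, with no
    # learned parameter involved.
    return pad_learnable(scores, 0.0)

rng = np.random.default_rng(0)
S = rng.normal(size=(4, 5))
P = log_sinkhorn(pad_learnable(S, alpha=1.0))
print(np.allclose(P.sum(axis=0), 1.0))  # columns are normalized
```

In both cases the padded row/column acts as an outlier bin; the learnable variant lets the network tune how easily mass flows into it, while the fixed-zero variant hard-codes that threshold.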

Could you give some hints about the reasoning behind these two Sinkhorn choices when you designed the network? Or, what is your understanding of the differences between these two implementations? That would be very helpful.

Thank you very much for your help.


haoyu94 (Owner) commented Jul 28, 2022

Hi, in our experiments, the implementation from SuperGlue also works better than the one we used at the fine level. This has been updated in GeoTransformer (https://github.com/qinzheng93/GeoTransformer).

haoyu94 closed this as completed Nov 30, 2022