Which recall did you report in the paper? #6

Closed
qsisi opened this issue Feb 11, 2022 · 2 comments

Comments


qsisi commented Feb 11, 2022

Hello! Thanks for open-sourcing this great work. I have a question about the registration recall reported in the paper.
From the code:

recall1 = success1/len(self.loader['test'].dataset)

It seems that you compute pair-level recall rather than scene-level recall, whereas the methods you compare against, such as D3Feat and PREDATOR, report registration recall at the scene level, not the pair level.

Could you give some hints about this?
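For reference, a minimal sketch of the two aggregation schemes being discussed (illustrative only, not code from this repo; success is assumed to be a boolean array with one entry per tested pair, and scene_ids maps each pair to its scene):

import numpy as np

def pair_level_recall(success):
    # Average over all test pairs, i.e. successes / total number of pairs,
    # which is what the quoted line computes.
    return float(np.mean(success))

def scene_level_recall(success, scene_ids):
    # Compute recall within each scene first, then average over scenes,
    # which is the convention used by D3Feat / PREDATOR.
    per_scene = [np.mean(success[scene_ids == s]) for s in np.unique(scene_ids)]
    return float(np.mean(per_scene))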

rabbityl (Owner) commented Feb 13, 2022

@qsisi That's indeed a very good point, which I had not noticed previously. Yes, pair-level RR is used in our repo. I re-evaluated PREDATOR's model using pair-level RR and it got 91.7/62.5 on 3DMatch/3DLoMatch (i.e., a 1.1% increase on 3DMatch). We will update the paper and clarify this.

rabbityl (Owner)

All metrics are updated as pair-level. Closing this.
