
Evaluation results are different from the paper #4

Closed
Kanghee-Lee opened this issue Nov 11, 2020 · 2 comments
Kanghee-Lee commented Nov 11, 2020

Hi JuanDuGit,

I ran the test code to compute the recall, but the recall performance is different from the paper.

According to the DH3D paper, recall @1 and recall @1% are 74.16 and 85.30.

However, when I ran globaldesc_extract.py with your pretrained model in model/global/xxx,
I got the following results:

Avg_recall :
1 : 0.7532
2 : 0.8284
3 : 0.8624

Avg_one_percent_retrieved :
0.8849

I would very much appreciate it if you could give me an explanation about these results. Thank you.

Best,
Ganghee

JuanDuGit (Owner) commented
Hi Ganghee,

The results can differ slightly due to two types of randomness in our method. The first happens during input preprocessing, when a fixed number (8192) of points is randomly selected for each point cloud. Note that the point clouds in our test set are not guaranteed to contain the same number of points across scenes, so other methods are free to use a suitable input size for their models.
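As an illustration (a minimal sketch, not the repository's actual preprocessing code), this is the kind of fixed-size random selection described above; the 8192 target size comes from the comment, everything else is assumed:

```python
import numpy as np

def sample_fixed_size(points, num_points=8192, rng=None):
    """Randomly pick a fixed number of points from a variable-size point cloud.

    points: (N, 3) array. N differs from scene to scene, so the selected
    subset (and hence the extracted descriptors) changes between runs
    unless the random generator is seeded.
    """
    rng = np.random.default_rng(rng)
    n = points.shape[0]
    if n >= num_points:
        idx = rng.choice(n, num_points, replace=False)
    else:
        # Pad small clouds by sampling with replacement.
        idx = rng.choice(n, num_points, replace=True)
    return points[idx]
```

Seeding the generator would make a single run reproducible; without a fixed seed, small deviations such as the ones above are expected.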

The other type of randomness comes from the subsampling step in the dilated convolution (https://github.com/JuanDuGit/DH3D/blob/master/core/backbones.py#L64) used when extracting the local descriptors. Subsampling is a common technique for giving the descriptor a larger receptive field, which we have found makes both the local and the global descriptors more informative.
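For intuition, here is a rough sketch (assumed, not the code behind the link above) of how randomly subsampling a point's neighbors before aggregation enlarges the effective receptive field while introducing run-to-run randomness:

```python
import numpy as np

def dilated_neighbor_sample(neighbor_idx, dilation=4, rng=None):
    """Illustrative only: keep a random 1/dilation fraction of each point's neighbors.

    neighbor_idx: (num_points, k) array of k-nearest-neighbor indices.
    Because the kept neighbors are spread over the full k-neighborhood,
    the convolution aggregates information from a wider region at the
    same cost -- at the price of a slightly different result on every run.
    """
    rng = np.random.default_rng(rng)
    num_points, k = neighbor_idx.shape
    kept = max(1, k // dilation)
    cols = np.stack([rng.choice(k, kept, replace=False) for _ in range(num_points)])
    return np.take_along_axis(neighbor_idx, cols, axis=1)
```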

We ran the evaluation multiple times and conservatively chose the numbers reported in the paper. In our experience, you can expect results about 1~3% higher than the reported ones when running the evaluation yourself.
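As a side note, a hypothetical way to quantify this spread yourself (not something the repository provides) would be to repeat the evaluation with different seeds and look at the statistics:

```python
import numpy as np

# Hypothetical helper: run_evaluation(seed=...) is assumed to wrap the
# recall computation from globaldesc_extract.py and return recall@1.
def recall_spread(run_evaluation, num_runs=5):
    recalls = np.array([run_evaluation(seed=s) for s in range(num_runs)])
    # Reporting the minimum (or the mean minus the std) is one conservative choice.
    return recalls.min(), recalls.mean(), recalls.std()
```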

I hope this clarifies your question.

Best,
Juan

Kanghee-Lee (Author) commented

Hi JuanDuGit,

Thanks for your explanation!

Best,
Ganghee
