The results can differ slightly due to two sources of randomness in our method. The first occurs during input preprocessing, when a fixed number of points (8192) is selected for each point cloud. Note that the point clouds in our test set are not guaranteed to contain the same number of points across scenes, so other methods are free to choose a suitable size for their models.
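The fixed-size selection described above can be sketched as follows. This is a minimal illustration, not the repository's actual preprocessing code; the function name and the with-replacement fallback for small clouds are assumptions:

```python
import numpy as np

def sample_fixed_size(points, n=8192, rng=None):
    """Randomly select exactly n points from a point cloud of arbitrary size.

    Hypothetical helper: when the cloud has fewer than n points, sampling
    falls back to drawing with replacement so the output size stays fixed.
    """
    rng = np.random.default_rng() if rng is None else rng
    replace = len(points) < n
    idx = rng.choice(len(points), size=n, replace=replace)
    return points[idx]

# A scene with more points than the target size:
cloud = np.random.rand(10000, 3)
print(sample_fixed_size(cloud).shape)  # (8192, 3)
```

Because the index selection is random, two runs over the same scene see different subsets of points, which is one reason repeated evaluations give slightly different recall numbers.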
The other source of randomness is the subsampling step in the dilated convolution (https://github.com/JuanDuGit/DH3D/blob/master/core/backbones.py#L64) used when extracting the local descriptor. Subsampling is a common technique to give the descriptor a larger receptive field, which we found helpful in making both the local and global descriptors more informative.
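The subsampling inside the backbone can be pictured like this. A hedged sketch only, assuming a simple random ratio-based selection; the actual operation in `backbones.py` may differ:

```python
import numpy as np

def random_subsample(points, ratio=0.5, rng=None):
    """Randomly keep a fraction of the input points.

    Hypothetical stand-in for the subsampling used before a dilated
    convolution: fewer points covering the same spatial extent means each
    remaining point aggregates a larger neighborhood (larger receptive field).
    """
    rng = np.random.default_rng() if rng is None else rng
    n_keep = max(1, int(len(points) * ratio))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

pts = np.random.rand(8192, 3)
print(random_subsample(pts).shape)  # (4096, 3)
```

Since the kept indices are drawn randomly at inference time, this step contributes additional run-to-run variation in the extracted descriptors.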
We ran the evaluation multiple times and conservatively chose the results reported in the paper. In our experience, you can expect results 1-3% higher than the reported numbers when running the evaluation yourself.
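Repeating the evaluation and reporting conservatively could look like the sketch below. The run values and the choice of the minimum as the "conservative" statistic are illustrative assumptions, not the authors' actual procedure:

```python
import statistics

def summarize_runs(recalls):
    """Summarize recall over repeated evaluation runs.

    Reporting the minimum is one conservative choice; the mean and max
    are included to show the spread across runs.
    """
    return {
        "min": min(recalls),
        "mean": statistics.mean(recalls),
        "max": max(recalls),
    }

# Hypothetical recall@1 values from three evaluation runs:
runs = [0.7532, 0.7489, 0.7601]
print(summarize_runs(runs)["min"])  # 0.7489
```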
Hi JuanDuGit,
I ran the test code to compute recall, but its recall performance differs from the paper.
According to the DH3D paper, recall@1 and recall@1% are 74.16 and 85.30.
However, when I ran globaldesc_extract.py with your pretrained model in model/global/xxx,
I got the following results:
Avg_recall :
1 : 0.7532
2 : 0.8284
3 : 0.8624
…
Avg_one_percent_retrieved :
0.8849
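For reference, the recall@N numbers above can be computed as below. This is a minimal sketch and not the repository's evaluation script; `ranks` (the 1-based rank of the correct match for each query) is an assumed input:

```python
import numpy as np

def recall_at_n(ranks, n):
    """Fraction of queries whose true match appears within the top-n retrievals.

    `ranks` holds the 1-based rank of the correct match for each query,
    so recall@N is simply the share of ranks that are <= N.
    """
    ranks = np.asarray(ranks)
    return float((ranks <= n).mean())

# Hypothetical ranks of the correct match for five queries:
ranks = [1, 3, 1, 2, 7]
print(recall_at_n(ranks, 1))  # 0.4
print(recall_at_n(ranks, 3))  # 0.8
```

Recall@1% is the same computation with `n` set to 1% of the database size, which is why it is always at least as high as recall@1.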
I would very much appreciate it if you could give me an explanation about these results. Thank you.
Best,
Ganghee