Benchmark on HPatches dataset #15

Closed
sunshantong opened this issue Jun 1, 2021 · 0 comments

Comments

@sunshantong

Hi @zjhthu, thanks for your great work!
I used your pretrained model to evaluate on the HPatches dataset, but the results are much worse than those reported in the paper. The evaluation results of the pretrained model are as follows:

Metric                   i_eval_stats   v_eval_stats   all_eval_stats
avg_n_feat               4492           4967           4738
avg_rep                  0.5117718      0.49724704     0.5042403
avg_precision            0.5868967      0.53153855     0.55819243
avg_matching_score       0.310993       0.23847558     0.2733914
avg_recall               0.56954247     0.45807076     0.5117423
avg_MMA                  0.58678484     0.48269215     0.53281087
avg_homography_accuracy  0.8846155      0.46428576     0.6666667
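
(For reference on what avg_MMA measures: in common HPatches protocols such as the D2-Net evaluation, a match counts as correct when the ground-truth homography maps the first keypoint to within a few pixels of its putative match. Below is a minimal NumPy sketch of that check; the function name and signature are illustrative, not this repo's actual evaluation code:)

```python
import numpy as np

def mean_matching_accuracy(kpts1, kpts2, matches, H, thresh=3.0):
    """Fraction of putative matches consistent with the ground-truth
    homography H within `thresh` pixels (illustrative MMA check).

    kpts1, kpts2: (N1, 2) / (N2, 2) keypoint coordinates.
    matches:      (M, 2) index pairs into kpts1 / kpts2.
    H:            (3, 3) ground-truth homography, image 1 -> image 2.
    """
    pts1 = kpts1[matches[:, 0]]      # matched keypoints in image 1
    pts2 = kpts2[matches[:, 1]]      # their putative matches in image 2

    # Warp image-1 keypoints into image 2 with the ground-truth homography.
    ones = np.ones((pts1.shape[0], 1))
    warped = (H @ np.hstack([pts1, ones]).T).T
    warped = warped[:, :2] / warped[:, 2:3]   # dehomogenize

    # A match is correct if the warped point lands near its putative match.
    err = np.linalg.norm(warped - pts2, axis=1)
    return float(np.mean(err < thresh))
```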

The model I trained myself and the post-CVPR updated model also perform very poorly. Do I need to adjust certain parameters, or could there be a problem with the evaluation scripts? Looking forward to your suggestions.
Thanks.
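
(Similarly for avg_homography_accuracy: common HPatches protocols count an image pair as correct when a homography estimated from the matches agrees with the ground truth at the four image corners. A rough sketch of that check, assuming OpenCV; the names are again illustrative rather than the repo's actual script:)

```python
import numpy as np
import cv2

def homography_correct(H_est, H_gt, img_shape, thresh=3.0):
    """Warp the four image corners with the estimated and ground-truth
    homographies and accept the pair if the mean corner error is small."""
    h, w = img_shape[:2]
    corners = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]],
                       dtype=np.float64).reshape(-1, 1, 2)
    est = cv2.perspectiveTransform(corners, H_est)
    gt = cv2.perspectiveTransform(corners, H_gt)
    return np.linalg.norm(est - gt, axis=2).mean() < thresh
```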
