About test evaluation content #177
Comments
Could you try the test script?
Hi, how do you define "the accuracy of the prediction"? Do you mean the error AUC over all test pairs, or the matching precision for a single pair?
Matching precision.
In "demo_single_pair", the color of the match lines is based on mconf, a tensor of values in [0, 1]. Above or below what mconf value is a matched image patch judged to be correct? (Line 20 in 2122156)
We're not changing colors in this function.
Yes.
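For reference, mapping mconf values to line colors with a jet colormap (as the demo's visualization does) could be sketched as below. `confidence_colors` is a hypothetical helper name; there is no hard mconf threshold separating "correct" from "incorrect" matches here, the colormap only visualizes relative confidence (blue = low, red = high):

```python
import numpy as np
from matplotlib import cm


def confidence_colors(mconf, alpha=1.0):
    """Map per-match confidences in [0, 1] to RGBA colors via the jet colormap.

    Note: this is a visualization aid only -- no threshold is applied to
    decide whether a match is correct.
    """
    mconf = np.clip(np.asarray(mconf, dtype=float), 0.0, 1.0)
    colors = cm.jet(mconf)   # (N, 4) RGBA array, one row per match
    colors[:, 3] = alpha     # set line transparency
    return colors


# Example: three matches with low, medium, and high confidence
colors = confidence_colors([0.1, 0.5, 0.95])
```

High-confidence matches come out red and low-confidence ones blue, matching the usual jet convention.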
I mean, when testing with other images instead of the MegaDepth dataset images, can we compute error metrics such as "epi_err", "t_err", and "R_err" and use them as an index of image-patch matching accuracy? If not, can the confidence used by "cm.jet" to distinguish colors be used as an indicator of matching accuracy?
Yes, when the K, R, and t of the image pairs are provided. Are you looking for this? (Line 50 in 2122156; Line 30 in 2122156)
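When K, R, and t are known for a pair, the per-match epipolar error can be computed from the essential matrix. The following is a rough sketch of the standard symmetric epipolar distance, not a copy of the repository's metrics code, and all names are illustrative:

```python
import numpy as np


def symmetric_epipolar_distance(pts0, pts1, K0, K1, R, t):
    """Squared symmetric epipolar distance for matched keypoints.

    pts0, pts1: (N, 2) matched pixel coordinates in image 0 / image 1.
    K0, K1:     (3, 3) camera intrinsics.
    R, t:       relative pose taking image-0 coordinates to image-1.
    """
    # Essential matrix E = [t]_x R
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    E = tx @ R
    # Normalize pixel coordinates with the intrinsics, lift to homogeneous
    p0 = np.concatenate([pts0, np.ones((len(pts0), 1))], axis=1) @ np.linalg.inv(K0).T
    p1 = np.concatenate([pts1, np.ones((len(pts1), 1))], axis=1) @ np.linalg.inv(K1).T
    Ep0 = p0 @ E.T    # epipolar lines in image 1
    Etp1 = p1 @ E     # epipolar lines in image 0
    d = np.sum(p1 * Ep0, axis=1) ** 2
    return d * (1.0 / (Ep0[:, 0] ** 2 + Ep0[:, 1] ** 2)
                + 1.0 / (Etp1[:, 0] ** 2 + Etp1[:, 1] ** 2))


# Example: pure translation along x with identity intrinsics; a match with
# equal normalized y-coordinates satisfies the epipolar constraint exactly.
K = np.eye(3)
errs = symmetric_epipolar_distance(np.array([[0.2, 0.3]]),
                                   np.array([[0.5, 0.3]]),
                                   K, K, np.eye(3), np.array([1.0, 0.0, 0.0]))
```

Thresholding this distance (e.g. against a few units of normalized-coordinate error) is one common way to label a match as an inlier or outlier.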
I have read this part and concluded that such evaluation metrics cannot be computed on other data. But I'm not sure I'm right, so I'm asking for your help.
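To be precise about what needs ground truth: R_err and t_err require a known ground-truth relative pose for the pair, so they indeed cannot be computed on arbitrary images without it. When such poses are available, the standard angular errors can be sketched as follows (illustrative names, not the repository's exact code):

```python
import numpy as np


def relative_pose_error(R_gt, t_gt, R_est, t_est):
    """Angular rotation and translation errors (degrees) between two poses."""
    # Rotation error: the rotation angle of R_est^T @ R_gt
    cos_r = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    r_err = np.degrees(np.arccos(np.clip(cos_r, -1.0, 1.0)))
    # Translation error: angle between the direction vectors. Scale is
    # unobservable from an essential matrix and the sign of t is ambiguous,
    # hence the abs() on the cosine.
    cos_t = np.dot(t_gt, t_est) / (np.linalg.norm(t_gt) * np.linalg.norm(t_est))
    t_err = np.degrees(np.arccos(np.clip(np.abs(cos_t), 0.0, 1.0)))
    return r_err, t_err


# Example: identical estimated and ground-truth poses give zero error
r_err, t_err = relative_pose_error(np.eye(3), np.array([0.0, 0.0, 1.0]),
                                   np.eye(3), np.array([0.0, 0.0, 1.0]))
```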
In your paper, the reported evaluation includes the accuracy of the predictions. But when I opened the dumped loftr_pred_eval .npy file, the only fields I found were "pair_names", "identifiers", "mconf", "mkpts1_f", "mkpts0_f", "epi_errs", "R_errs", "t_errs", and "inliers". How can I show the prediction accuracy?