
About test evaluation content #177

Closed
FujiwaraZayako opened this issue May 3, 2022 · 10 comments

Comments

@FujiwaraZayako

In your paper, the evaluation comparison includes the accuracy of the prediction. But when I opened loftr_pred_eval.npy, the only contents I found were 'pair_names', 'identifiers', 'mconf', 'mkpts1_f', 'mkpts0_f', 'epi_errs', 'R_errs', 't_errs', 'inliers'. (screenshots)
How can I show the prediction accuracy?

@zehongs (Member) commented May 3, 2022

Please try the test script?

@FujiwaraZayako (Author)

> Please try the test script?

This is the result of my test, and there is no prediction accuracy in it. (screenshot)

After reading the plotting source code, I tried counting how many "epi_err" values in the "loftr_pred_eval" data are less than the threshold. Is that right? (screenshot)

@FujiwaraZayako (Author)

Now I have new questions about the accuracy of matched point pairs. (screenshot)
In "demo_single_pair", the color of the match lines is based on mconf, a tensor of values in [0, 1]. Above or below what value of mconf is a matched image block judged correct? (screenshot)

@zehongs (Member) commented May 5, 2022

Hi, how do you define "the accuracy of the prediction"? Do you mean error AUC of all test pairs? Or matching precision for a single pair?

@FujiwaraZayako (Author)

matching precision

@zehongs (Member) commented May 5, 2022

> In "demo_single_pair", the color of the match lines is based on mconf, a tensor of values in [0, 1]. Above or below what value of mconf is a matched image block judged correct?


def make_matching_figure(

We're not changing colors in this function.
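For readers following along, a minimal sketch of the idea behind that coloring (this illustrates how a jet colormap maps confidence to color in general, and is not the repository's exact code):

```python
import numpy as np
from matplotlib import cm

# cm.jet maps each scalar in [0, 1] to an RGBA tuple: low confidence
# comes out blue, high confidence red. There is no built-in cutoff that
# labels a match "correct"; any threshold on mconf is the user's choice.
mconf = np.array([0.1, 0.5, 0.9])
colors = cm.jet(mconf)

print(colors.shape)  # (3, 4): one RGBA row per match
```

The colormap is purely a visualization aid; it carries no more information than mconf itself.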

@zehongs (Member) commented May 5, 2022

> After reading the plotting source code, I tried counting how many "epi_err" values in the "loftr_pred_eval" data are less than the threshold. Is that right?

Yes.
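For concreteness, a hedged sketch of that count (the field name "epi_errs" and the threshold value are assumptions inferred from this thread, not verified repository details):

```python
import numpy as np

def matching_precision(eval_pairs, thr=5e-4):
    """Fraction of matches whose epipolar error is below `thr`, over all pairs."""
    errs = np.concatenate([np.asarray(p["epi_errs"]) for p in eval_pairs])
    return float((errs < thr).mean()) if errs.size else 0.0

# Toy data standing in for the dumps; in practice one would load them with
#   eval_pairs = np.load("loftr_pred_eval.npy", allow_pickle=True)
pairs = [{"epi_errs": np.array([1e-4, 2e-3, 3e-4])},
         {"epi_errs": np.array([9e-4, 1e-5])}]
print(matching_precision(pairs))  # 3 of 5 errors below 5e-4 -> 0.6
```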

@FujiwaraZayako (Author)

> def make_matching_figure(
>
> We're not changing colors in this function.
This part of the function computes the error by reading the npz information of the correspondingly numbered dataset during training and testing, and then draws the figure. I saw the corresponding information when reading TensorBoard, but when testing other data I cannot compute the accuracy information. (screenshot)

I mean, when testing with other images instead of MegaDepth dataset images, can we compute the key error information such as "epi_err", "t_err", and "R_err" and use it as an index of image-block matching accuracy? If not, can the confidence that "cm.jet" uses to distinguish colors be used as an indicator for computing matching accuracy?

@zehongs (Member) commented May 6, 2022

Yes, when the K, R, and t of the image pairs are provided.
cm.jet is a colormap converter; I don't think you can use it to calculate matching accuracy. I also want to point out that the epipolar error only gives a rough estimate of matching accuracy, since there can still be faulty matches with low epipolar error. That's one of the reasons we don't use this metric in our paper.

Are you looking for this?

def compute_symmetrical_epipolar_errors(data):

def symmetric_epipolar_distance(pts0, pts1, E, K0, K1):
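As a rough guide to what a function with that signature computes, here is a NumPy re-derivation of the squared symmetric epipolar distance from the standard formula (a sketch under the convention x1ᵀ·E·x0 = 0, not the repository's implementation):

```python
import numpy as np

def symmetric_epipolar_distance(pts0, pts1, E, K0, K1):
    """Squared symmetric epipolar distance for matched pixel coords
    pts0/pts1 of shape (N, 2), essential matrix E, intrinsics K0/K1."""
    def to_norm_homog(pts, K):
        # pixel coords -> homogeneous normalized camera coordinates
        h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
        return (np.linalg.inv(K) @ h.T).T

    x0 = to_norm_homog(pts0, K0)
    x1 = to_norm_homog(pts1, K1)
    Ex0 = x0 @ E.T                      # epipolar lines of x0 in image 1
    Etx1 = x1 @ E                       # epipolar lines of x1 in image 0
    num = np.sum(x1 * Ex0, axis=1) ** 2
    # distance to the line in each image, summed symmetrically
    return num * (1.0 / np.sum(Ex0[:, :2] ** 2, axis=1)
                  + 1.0 / np.sum(Etx1[:, :2] ** 2, axis=1))

# Sanity check: with R = I and t = (1, 0, 0), E = [t]_x, and the pair
# (0, 0) <-> (0.2, 0) is a perfect correspondence, so the error is zero.
E = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
K = np.eye(3)
d = symmetric_epipolar_distance(np.array([[0.0, 0.0]]),
                                np.array([[0.2, 0.0]]), E, K, K)
print(d)  # [0.]
```

The key practical point from the answer above: evaluating this requires E (i.e. the relative pose R, t) and the intrinsics K for the pair, which is why it cannot be computed for arbitrary images without ground-truth poses.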

@FujiwaraZayako (Author)

> Are you looking for this?
>
> def compute_symmetrical_epipolar_errors(data):
>
> def symmetric_epipolar_distance(pts0, pts1, E, K0, K1):

I have read this part and concluded that such evaluation metrics cannot be computed on other data. But I am not sure whether I am right, so I am asking you for help.
Thank you for your generous answers.

zehongs closed this as completed May 6, 2022