
How can I get inference results without gt? #5

Closed
MiracleHW opened this issue Jul 24, 2019 · 4 comments
Labels
bug Something isn't working

Comments

@MiracleHW

MiracleHW commented Jul 24, 2019

I downloaded the wireframe dataset and ran
dataset/wireframe.py
process.py
post.py
in order, and got the expected results.

I want to test some images without gt, so I modified some code in dataset/wireframe.py, but the result is very strange.
Here is the code I modified:
[screenshot: modified code in dataset/wireframe.py]
And here is the visual result with the modified code:
[screenshot: result with modified code]
Here is the visual result with the original code:
[screenshot: result with original code]

It seems that the gt influences the test result. How can I remove the gt influence during inference, and how can I test an image without gt?
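For anyone wanting to reuse the existing data pipeline on a gt-free image, one possible workaround is to pack the image together with empty annotation arrays. This is only a sketch: the field names `image`, `junc`, and `lines` are assumptions for illustration — check the .npz files actually produced by dataset/wireframe.py for the real keys the loader expects.

```python
import numpy as np

def dummy_gt_record(image, out_path):
    """Save an image together with empty annotation arrays so the
    existing data pipeline can be reused for gt-free inference.

    NOTE: the key names below are hypothetical; match them to the
    ones emitted by dataset/wireframe.py in your checkout.
    """
    np.savez_compressed(
        out_path,
        image=image,                            # H x W x 3 input image
        junc=np.zeros((0, 2), np.float32),      # no ground-truth junctions
        lines=np.zeros((0, 2, 2), np.float32),  # no ground-truth line segments
    )
```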

@zhou13
Owner

zhou13 commented Jul 24, 2019

Hi Wei Hu,

Thank you so much for testing our code. Your issue just scared the hell out of me... I can reproduce your problem: it happens because we previously used 2x the number of ground-truth junctions as the number of candidate junctions. This should not happen in evaluation mode, and it is now fixed in master by changing the number of candidate junctions to a hardcoded constant during testing.
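A minimal sketch of the fix described above: top-K selection from a junction heatmap, where K is a hardcoded constant rather than a function of the ground truth. The constant `K_TEST = 300` and the heatmap shape are illustrative assumptions, not the repository's actual values.

```python
import numpy as np

def topk_junctions(jmap, k):
    """Return the (row, col) coordinates of the k highest-scoring
    cells in a junction heatmap."""
    flat = jmap.ravel()
    idx = np.argsort(flat)[::-1][:k]          # indices of the top-k scores
    rows, cols = np.unravel_index(idx, jmap.shape)
    return np.stack([rows, cols], axis=1)     # shape (k, 2)

# Old (buggy) behaviour: K depended on the ground truth, so inference
# results changed with whatever gt file happened to be supplied:
#   k = 2 * len(gt_junctions)
# Fixed behaviour: a gt-independent constant during testing.
K_TEST = 300                                  # hypothetical constant
jmap = np.random.rand(128, 128)               # dummy junction heatmap
juncs = topk_junctions(jmap, K_TEST)
print(juncs.shape)                            # (300, 2)
```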

I don't think this change will affect the PR curve and other metrics much, since it only increases the number of candidate lines rather than changing their scores or ranking. But I will rerun the evaluation code and include a script to make testing on a new image easy, without the hassle of data processing, hopefully this weekend. Again, thank you for finding the problem.

Best,
Yichao Zhou.

@zhou13 zhou13 closed this as completed in 8543ec5 Jul 24, 2019
@zhou13 zhou13 reopened this Jul 24, 2019
@MiracleHW
Author

MiracleHW commented Jul 25, 2019

Thanks for your reply. I tested your updated code, and the result seems to have more messy lines than before.
Updated code result:
[screenshot: result with updated code]
Previous result:
[screenshot: result with previous code]
Maybe the number of candidate junctions is set too large.

@zhou13 zhou13 added the bug Something isn't working label Jul 31, 2019
@zhou13
Owner

zhou13 commented Jul 31, 2019

I will update the thresholding scheme, as the current one is arbitrary (it should threshold by confidence rather than by a constant count). The visualization threshold might also be affected, judging from your figures.
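A sketch of what confidence-based thresholding could look like, replacing the fixed top-K count. The threshold `tau` and the cap `k_max` are hypothetical values chosen for illustration, not the ones used in the repository.

```python
import numpy as np

def junctions_by_confidence(jmap, tau=0.5, k_max=300):
    """Keep every junction whose heatmap score exceeds tau,
    sorted by score and capped at k_max to bound downstream cost."""
    rows, cols = np.nonzero(jmap > tau)       # cells above the confidence threshold
    scores = jmap[rows, cols]
    order = np.argsort(scores)[::-1][:k_max]  # highest-confidence first
    return np.stack([rows[order], cols[order]], axis=1)
```

Unlike a constant count, the number of junctions kept now adapts to the image: a scene with few confident junctions yields few candidates, which should reduce the spurious lines seen in the figures above.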

@zhou13
Owner

zhou13 commented Aug 7, 2019

I implemented the new thresholding strategy. The performance metrics in 4c74116 (using N_gt * 2 junctions) are:

msAP 5/10/15: 58.9 62.9 64.7
APH: 83.0
FH: 81.6

and the performance metrics now (thresholding on the junction map) are:

msAP 5/10/15: 58.9 62.9 64.7
APH: 82.8
FH: 81.2

I will consider this fixed.
