

Find some weird code in eval.py #17

Closed
murdockhou opened this issue Oct 23, 2018 · 3 comments

Comments

@murdockhou

Hi, thanks for your work. While looking through the code in this repository, I found something weird in eval.py. When getting the predicted bbox_keypoints, you use the ground-truth keypoints to assign bbox_keypoints. The relevant code is around lines 200 and 205 of eval.py.

The peaks are the ground-truth keypoint coordinates, right? It seems that the true coordinates are used to assign the predicted bbox_keypoints. I actually think lines 209~220 of eval.py are the right way to get the real predicted bbox_keypoints.

Maybe you can give me some advice about this, thanks.

@murdockhou
Author

Is there anyone who can explain this to me?
Thanks~

@jackyjsy

Same issue here. It seems to me that the evaluation part uses the ground-truth keypoints to infer the estimated keypoints. @salihkaragoz Could you address our concerns about your evaluation code? Thanks!

@salihkaragoz
Owner

Hi,
We are not using the PRN for keypoint prediction but to decide which keypoints are associated with a bounding box. The output of the PRN is therefore used to score each keypoint, not to fine-tune the input keypoints. Based on these scores, we run a greedy assignment method to obtain the final matchings.
Thanks
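
To make the explanation concrete, here is a minimal sketch of the idea described above: the PRN output is treated as a per-joint score map for a box crop, candidate peaks from the keypoint branch are scored against it, and a greedy pass keeps the best candidate per joint. All names, shapes, and functions here are hypothetical illustrations; this is not the actual code from eval.py.

```python
import numpy as np

NUM_JOINTS = 17  # hypothetical joint count for illustration

def score_candidates_with_prn(prn_output, candidates):
    """Score candidate keypoint peaks inside one bounding-box crop.

    prn_output : (H, W, NUM_JOINTS) array, the PRN's per-joint score map
                 for the crop (hypothetical shape).
    candidates : iterable of (x, y, joint_id) peaks, in crop coordinates,
                 coming from the bottom-up keypoint branch.
    Returns a list of (score, x, y, joint_id).
    """
    scored = []
    for x, y, j in candidates:
        score = prn_output[int(y), int(x), j]  # look up the PRN score at the peak
        scored.append((float(score), x, y, j))
    return scored

def greedy_assign(scored, num_joints=NUM_JOINTS):
    """Greedily keep the highest-scoring candidate for each joint."""
    best = [None] * num_joints
    for score, x, y, j in sorted(scored, reverse=True):
        if best[j] is None:  # first (i.e. highest-scoring) candidate wins
            best[j] = (x, y, score)
    return best

# Toy usage with random data.
prn_out = np.random.rand(56, 36, NUM_JOINTS)
peaks = [(10, 20, 0), (12, 22, 0), (30, 40, 5)]
assignments = greedy_assign(score_candidates_with_prn(prn_out, peaks))
```

In this reading, the peak coordinates themselves are never moved by the PRN; the network only decides which of the detected peaks belong to the box, which matches the maintainer's point that the keypoints are scored rather than fine-tuned.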
