
evaluate #3

Open
ccxietao opened this issue Jan 4, 2019 · 4 comments

Comments

ccxietao commented Jan 4, 2019

Hello, how can I use mAP to evaluate this model?

aby2s (Owner) commented Jan 4, 2019

Hi.
Sorry, I don't quite understand what you're asking for. You can either evaluate the model on the COCO validation set or obtain a mask for any image; both are described in the README.

ccxietao (Author) commented Jan 4, 2019

Hello, I want to know how to evaluate the quality of the model with the COCO dataset's AP metric. Your code uses IoU as the evaluation metric, so how should I modify it?

aby2s (Owner) commented Jan 4, 2019

The best way to do it is to store the predictions in COCO format and use the official scripts for evaluation. Unfortunately, I didn't implement that, but it's not hard to do.
You can modify the `_run_validation` method, or add a similar one, so that the session run computes the predictions themselves (`self.score_predictions`/`self.refinement_prediction` for SharpMask) instead of the metric. Then store the predictions in COCO format and use the COCO evaluation tools to calculate AP: https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py
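
For reference, here is a minimal sketch of the evaluation step with pycocotools, assuming the predictions have already been collected into the COCO results format. The file names, the `encode_binary_mask` helper, and the `category_id` value are placeholders for illustration, not part of this repo:

```python
import json

import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
from pycocotools import mask as mask_utils


def encode_binary_mask(binary_mask):
    # RLE-encode an HxW binary mask; pycocotools expects a
    # Fortran-ordered uint8 array and returns "counts" as bytes.
    rle = mask_utils.encode(np.asfortranarray(binary_mask.astype(np.uint8)))
    rle["counts"] = rle["counts"].decode("ascii")  # make it JSON-serializable
    return rle


# One dict per predicted mask, in the COCO results format. These would be
# collected inside a modified _run_validation loop, e.g.:
#   results.append({
#       "image_id": int(image_id),
#       "category_id": 1,  # placeholder: SharpMask proposals are class-agnostic
#       "segmentation": encode_binary_mask(mask),
#       "score": float(score),
#   })
with open("sharpmask_results.json") as f:  # hypothetical output file
    results = json.load(f)

coco_gt = COCO("annotations/instances_val2017.json")  # ground-truth annotations
coco_dt = coco_gt.loadRes(results)

coco_eval = COCOeval(coco_gt, coco_dt, iouType="segm")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP/AR, including AP@[0.50:0.95]
```

Since SharpMask proposals are class-agnostic, you may also want to set `coco_eval.params.useCats = 0` before calling `evaluate()`, so that category labels are ignored during matching.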

ccxietao (Author) commented Jan 4, 2019

OK, I will try it. Thank you very much.
