No confidence score for the predicted boxes #51

Closed · pordeli opened this issue Jun 3, 2019 · 14 comments
Labels: question (User has doubts about a concept or code.)

Comments

@pordeli commented Jun 3, 2019

I have ground-truth boxes and predicted boxes from the YOLO, DPM, and OpenPose algorithms. Their format is [x y w h] and I do not have the confidence scores. Is it possible to use your Python code to get the precision, the recall, and the curve? I get the error below:

Metrics-master/lib/BoundingBox.py", line 45, in __init__
    'For bbType='Detection', it is necessary to inform the classConfidence value.')
OSError: For bbType='Detection', it is necessary to inform the classConfidence value.

@rafaelpadilla (Owner)

Dear @pordeli ,

No. You need the confidence scores to evaluate your detections: the precision x recall curve is built by ranking the detections by their confidences and sweeping a threshold over that ranking, so without scores there is no ordering from which to build the curve.

Best regards,
Rafael

rafaelpadilla added the question label on Jun 3, 2019
@pordeli (Author) commented Jun 4, 2019

Thanks for your reply. Since I do not have the scores, do you know of any code I could use to get the precision, the recall, and the curve for my 100 video frames?

@Ibmaria commented Jun 4, 2019

Hello,
I am trying to evaluate a model on my own dataset. Can you tell me how to get the detected box coordinates in this format: <class_name> <confidence> <left> <top> <right> <bottom>?
Thanks

@rafaelpadilla (Owner)

@pordeli ,

Due to the way the metric we apply works, you will need the confidences. I do not know of any other code you could use.
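
That said, if you truly have no scores, you can still compute a single precision/recall point (not the full curve) by matching detections to ground truths at a fixed IoU threshold. Below is a minimal, illustrative sketch, not part of this repository; the [x, y, w, h] box format and the 0.5 IoU threshold are assumptions:

```python
# Illustrative sketch: single precision/recall point without confidence scores.
# Assumes boxes are [x, y, w, h], one list per image, and an IoU threshold of 0.5.

def iou(box_a, box_b):
    # Convert [x, y, w, h] to corner coordinates.
    ax1, ay1, ax2, ay2 = box_a[0], box_a[1], box_a[0] + box_a[2], box_a[1] + box_a[3]
    bx1, by1, bx2, by2 = box_b[0], box_b[1], box_b[0] + box_b[2], box_b[1] + box_b[3]
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def precision_recall(gts_per_image, dets_per_image, iou_thresh=0.5):
    tp = fp = total_gt = 0
    for gts, dets in zip(gts_per_image, dets_per_image):
        total_gt += len(gts)
        matched = set()
        for det in dets:
            # Greedily match each detection to the best still-unmatched ground truth.
            best_iou, best_idx = 0.0, -1
            for i, gt in enumerate(gts):
                if i in matched:
                    continue
                overlap = iou(det, gt)
                if overlap > best_iou:
                    best_iou, best_idx = overlap, i
            if best_iou >= iou_thresh:
                tp += 1
                matched.add(best_idx)
            else:
                fp += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / total_gt if total_gt else 0.0
    return precision, recall
```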

@rafaelpadilla (Owner)

@Ibmaria ,

For the detections you need to apply an object detector, such as YOLO, Faster R-CNN, Fast R-CNN, etc. These will provide the detection bounding boxes, classes, and confidences.
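
The evaluator then reads these from txt files, one per image, in a ground-truth folder and a detection folder. A sketch of the expected line formats, following the repository's README; the class names and the numbers below are just illustrative:

```
# groundtruths/00001.txt: <class_name> <left> <top> <right> <bottom>
person 25 16 38 56
person 129 123 170 185

# detections/00001.txt: <class_name> <confidence> <left> <top> <right> <bottom>
person 0.88 24 18 39 54
person 0.70 125 119 172 188
```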

@Ibmaria commented Jun 4, 2019

@rafaelpadilla

For the model I use SSD MobileNet.
For the evaluation, you said to create two folders, one for the ground truths and one for the detections. How did you create the detection files in the format <class_name> <confidence> <left> <top> <right> <bottom>? I cannot save them in txt format. How do I save them like the ground truths? Thanks in advance.

@rafaelpadilla (Owner)

@Ibmaria ,

I have never worked with MobileNet, so I don't know its output format. But you should take its output and convert it to the required format.

@Ibmaria commented Jun 5, 2019

@rafaelpadilla
My question is how to convert it to the required format, with the matching file name (e.g. 00001.txt).

@rafaelpadilla (Owner)

Dear @Ibmaria ,

You will need to write a script for that. It should take your detections and convert them from the format MobileNet provides into the required format and file names.

Could you show me the detections format given by MobileNet?
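
In the meantime, here is a minimal, illustrative sketch of such a converter. It assumes the TensorFlow Object Detection API's output dictionary (normalized detection_boxes in [ymin, xmin, ymax, xmax] order, detection_scores, detection_classes); the label map, the score threshold, and the file naming are assumptions you would adapt to your own pipeline:

```python
import os

# Illustrative converter: TensorFlow Object Detection API outputs -> one txt per image.
# Assumes boxes are normalized [ymin, xmin, ymax, xmax]; label_map maps class ids to
# names; image_id is something like '00001' so the txt matches its ground-truth file.

def save_detections(output_dict, image_id, img_width, img_height,
                    label_map, out_dir='detections', score_thresh=0.3):
    os.makedirs(out_dir, exist_ok=True)
    lines = []
    for i in range(int(output_dict['num_detections'])):
        score = float(output_dict['detection_scores'][i])
        if score < score_thresh:
            continue
        ymin, xmin, ymax, xmax = output_dict['detection_boxes'][i]
        # Convert normalized coordinates to absolute <left> <top> <right> <bottom>.
        left, top = int(xmin * img_width), int(ymin * img_height)
        right, bottom = int(xmax * img_width), int(ymax * img_height)
        class_name = label_map[int(output_dict['detection_classes'][i])]
        lines.append('%s %.6f %d %d %d %d' % (class_name, score, left, top, right, bottom))
    with open(os.path.join(out_dir, image_id + '.txt'), 'w') as f:
        f.write('\n'.join(lines))
```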

@Ibmaria commented Jun 6, 2019

@rafaelpadilla
Can you show me your script instead? I will try to understand it and adapt it to mine. Or give me your email and I will send you my code. Thanks in advance.

@Ibmaria commented Jun 6, 2019

@rafaelpadilla
I finally did it, but I would be delighted to see yours. Thanks.

@rafaelpadilla (Owner)

@Ibmaria ,

I do not have a script for that. If you provide the output format of MobileNet, I might be able to help you.

@rafaelpadilla (Owner)

@Ibmaria ,

Was your problem solved?

@Ibmaria commented Jun 14, 2019

@rafaelpadilla
Yes, it was solved. Thanks.
