
Visualize model predictions (get scores and boxes) #2

Closed
kschwethelm opened this issue Dec 2, 2020 · 5 comments

Comments

@kschwethelm

Hello,

thank you for your great work!

I want to test the performance of my network on some test images. For this I visualize the predicted boxes and scores on the images. I got everything working, since I can reuse my code from the original DETR, but I was wondering how to get the correct scores and labels.

For DETR I did:

# keep only predictions with 0.7+ confidence
probas = outputs['pred_logits'].softmax(-1)[0, :, :-1]
keep = probas.max(-1).values > 0.7

# convert boxes from [0; 1] to image scales
bboxes_scaled = rescale_bboxes(outputs['pred_boxes'][0, keep], im.size)

scores, boxes = probas[keep], bboxes_scaled
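For context, `rescale_bboxes` here is the helper from the DETR demo notebook; a minimal sketch of it (under the assumption that `size` is a PIL `(width, height)` tuple) looks roughly like:

```python
import torch

def box_cxcywh_to_xyxy(x):
    # convert (center_x, center_y, width, height) to (x0, y0, x1, y1)
    x_c, y_c, w, h = x.unbind(1)
    b = [(x_c - 0.5 * w), (y_c - 0.5 * h),
         (x_c + 0.5 * w), (y_c + 0.5 * h)]
    return torch.stack(b, dim=1)

def rescale_bboxes(out_bbox, size):
    # size is the PIL image size, i.e. (width, height)
    img_w, img_h = size
    b = box_cxcywh_to_xyxy(out_bbox)
    b = b * torch.tensor([img_w, img_h, img_w, img_h], dtype=torch.float32)
    return b
```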

Since Deformable DETR uses a sigmoid instead of a softmax, I replaced these lines with the following
(heavily inspired by the PostProcess class from deformable_detr.py 😄):

prob = out_logits.sigmoid()
topk_values, topk_indexes = torch.topk(prob.view(out_logits.shape[0], -1), 100, dim=1)
scores = topk_values
topk_boxes = topk_indexes // out_logits.shape[2]
labels = topk_indexes % out_logits.shape[2]
boxes = box_ops.box_cxcywh_to_xyxy(out_bbox)
boxes = torch.gather(boxes, 1, topk_boxes.unsqueeze(-1).repeat(1, 1, 4))

# and from relative [0, 1] to absolute [0, height] coordinates
img_w, img_h = im.size  # PIL's im.size is (width, height)
img_w = torch.tensor(img_w, device=boxes.device, dtype=torch.float32)
img_h = torch.tensor(img_h, device=boxes.device, dtype=torch.float32)
scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=0).unsqueeze(0)
boxes = boxes * scale_fct[:, None, :]

With this I get a lot of false positives. The scores are pretty low compared to the softmax scores, so which threshold would you recommend for getting rid of the false positives?

@jackroos
Member

jackroos commented Dec 3, 2020

Hi @krxxxxxxxanc ,

You can add the following code at the end to filter out false positives with low confidence:

keep = scores[0] > threshold
boxes = boxes[0, keep]
labels = labels[0, keep]

You could simply set threshold=0.5, but it is better to tune the threshold manually according to your test images.
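A minimal end-to-end sketch of that filter, with made-up tensors shaped like the PostProcess outputs (batch of 1; only 3 queries here for brevity, instead of 100):

```python
import torch

# hypothetical outputs shaped like the PostProcess results:
# scores, labels: [batch, num_queries]; boxes: [batch, num_queries, 4]
scores = torch.tensor([[0.9, 0.6, 0.2]])
labels = torch.tensor([[1, 3, 2]])
boxes = torch.tensor([[[10., 10., 50., 50.],
                       [20., 20., 60., 60.],
                       [ 0.,  0.,  5.,  5.]]])

threshold = 0.5
keep = scores[0] > threshold  # boolean mask over the queries
scores, labels, boxes = scores[0, keep], labels[0, keep], boxes[0, keep]
# only the two detections scoring above 0.5 remain
```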

@kschwethelm
Author

Thank you!

When I calculate mAP for my dataset and apply a score threshold there, I always get a worse result than without a threshold. But in my box visualizations I can clearly see a lot of false positives. Is this a flaw in the mAP metric?

I asked the same question for the original DETR, where it's even more extreme:

facebookresearch/detr#293

@kschwethelm
Author

Alright, I think I get it now. In the COCO AP definition, you evaluate how the N (= 100 in this case) highest-scoring predictions perform. Because the predictions are ranked in descending order of score, false positives with low confidence don't have much impact on the AP. In a real use case you wouldn't use all 100 predictions, but set a score threshold instead.
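To make the ranking intuition concrete, here is a toy AP computation (ignoring COCO's 101-point interpolation, which doesn't change the picture): low-score false positives appended after all true positives leave AP unchanged, while a threshold that also removes a true positive lowers it.

```python
def average_precision(ranked_hits, num_gt):
    """AP from predictions sorted by descending score.
    ranked_hits[i] is True if prediction i matched a ground-truth box."""
    tp, precisions = 0, []
    for i, hit in enumerate(ranked_hits, start=1):
        if hit:
            tp += 1
            precisions.append(tp / i)  # precision at this recall point
    return sum(precisions) / num_gt

# two ground-truth boxes, both found first; three low-score FPs trail behind
print(average_precision([True, True, False, False, False], num_gt=2))  # 1.0
# a threshold that also dropped the second (true) detection would hurt AP
print(average_precision([True], num_gt=2))  # 0.5
```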

If you agree with me, you can close this issue 👍

@jackroos
Member

jackroos commented Dec 3, 2020

Yes, I agree with you. At best the threshold has no positive impact on AP, and it may filter out true positives as well.

@GivanTsai

> Hi @krxxxxxxxanc ,
>
> You can add the following code at the end to filter out false positives with low confidence:
>
>     keep = scores[0] > threshold
>     boxes = boxes[0, keep]
>     labels = labels[0, keep]
>
> You could simply set threshold=0.5, but it is better to tune the threshold manually according to your test images.

Is there information leakage when tuning the threshold based on test images?
