
data about IoU #1401

Closed
wanterlim opened this issue Nov 15, 2020 · 19 comments
Labels: question (Further information is requested), Stale

Comments

@wanterlim

❔Question

I'm currently working on object detection using YOLOv5. I trained my model, but I can't find the IoU. Is it the Box val that replaces the IoU now?
Here are my training results:
results

Additional context

@wanterlim added the question label on Nov 15, 2020
@github-actions
Contributor

github-actions bot commented Nov 15, 2020

Hello @wanterlim, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook Open In Colab, Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

@glenn-jocher
Member

Regression loss is the first column. Regression metric is CIoU.

@wanterlim
Author

image
So, is this the CIoU that you mentioned?

@glenn-jocher
Member

This is CIoU loss.

@wanterlim
Author

So, where can I find the regression metric? Sorry, I'm still confused.

@glenn-jocher
Member

utils/loss.py compute_loss()

@wanterlim
Author

image
Thank you for the reply. I got this error when I ran the code in Colab. Can you give me the right code to compute IoU?

@glenn-jocher
Member

@wanterlim loss.py is not a runnable file, it has functions used for loss computation. The function that computes CIoU between pairs of boxes is here:

yolov5/utils/general.py

Lines 188 to 231 in 2026d4c

```python
def bbox_iou(box1, box2, x1y1x2y2=True, GIoU=False, DIoU=False, CIoU=False, eps=1e-9):
    # Returns the IoU of box1 to box2. box1 is 4, box2 is nx4
    box2 = box2.T

    # Get the coordinates of bounding boxes
    if x1y1x2y2:  # x1, y1, x2, y2 = box1
        b1_x1, b1_y1, b1_x2, b1_y2 = box1[0], box1[1], box1[2], box1[3]
        b2_x1, b2_y1, b2_x2, b2_y2 = box2[0], box2[1], box2[2], box2[3]
    else:  # transform from xywh to xyxy
        b1_x1, b1_x2 = box1[0] - box1[2] / 2, box1[0] + box1[2] / 2
        b1_y1, b1_y2 = box1[1] - box1[3] / 2, box1[1] + box1[3] / 2
        b2_x1, b2_x2 = box2[0] - box2[2] / 2, box2[0] + box2[2] / 2
        b2_y1, b2_y2 = box2[1] - box2[3] / 2, box2[1] + box2[3] / 2

    # Intersection area
    inter = (torch.min(b1_x2, b2_x2) - torch.max(b1_x1, b2_x1)).clamp(0) * \
            (torch.min(b1_y2, b2_y2) - torch.max(b1_y1, b2_y1)).clamp(0)

    # Union Area
    w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps
    w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps
    union = w1 * h1 + w2 * h2 - inter + eps

    iou = inter / union
    if GIoU or DIoU or CIoU:
        cw = torch.max(b1_x2, b2_x2) - torch.min(b1_x1, b2_x1)  # convex (smallest enclosing box) width
        ch = torch.max(b1_y2, b2_y2) - torch.min(b1_y1, b2_y1)  # convex height
        if CIoU or DIoU:  # Distance or Complete IoU https://arxiv.org/abs/1911.08287v1
            c2 = cw ** 2 + ch ** 2 + eps  # convex diagonal squared
            rho2 = ((b2_x1 + b2_x2 - b1_x1 - b1_x2) ** 2 +
                    (b2_y1 + b2_y2 - b1_y1 - b1_y2) ** 2) / 4  # center distance squared
            if DIoU:
                return iou - rho2 / c2  # DIoU
            elif CIoU:  # https://github.com/Zzh-tju/DIoU-SSD-pytorch/blob/master/utils/box/box_utils.py#L47
                v = (4 / math.pi ** 2) * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2)
                with torch.no_grad():
                    alpha = v / ((1 + eps) - iou + v)
                return iou - (rho2 / c2 + v * alpha)  # CIoU
        else:  # GIoU https://arxiv.org/pdf/1902.09630.pdf
            c_area = cw * ch + eps  # convex area
            return iou - (c_area - union) / c_area  # GIoU
    else:
        return iou  # IoU
```
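For readers who just want to check what the default (plain IoU) path of the function above computes, here is a small self-contained sketch in pure Python, with no torch dependency. The helper name `iou_xyxy` and the example boxes are illustrative, not part of the repo:

```python
# Plain IoU for two boxes in x1, y1, x2, y2 format, mirroring the
# default (non-GIoU/DIoU/CIoU) path of bbox_iou above.
def iou_xyxy(box1, box2, eps=1e-9):
    # Intersection rectangle (empty intersections clamp to zero area)
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = area1 + area2 - intersection
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter + eps)

# Two 10x10 boxes overlapping in a 5x5 patch: inter = 25, union = 175
print(round(iou_xyxy((0, 0, 10, 10), (5, 5, 15, 15)), 4))  # 0.1429
```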

@wanterlim
Author

Can you give some code to save or display my IoU? Do I need to retrain my dataset or use my best_weights? I really appreciate your help.

@github-actions
Contributor

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@ashu2496

> Can you give some code to save or display my IoU? Do I need to retrain my dataset or use my best_weights? I really appreciate your help.

If you find a workaround, would you post it? I am also trying to find out how to get IoU.

@JakobStadlhuber

Is it not possible to get the IoU metric somehow?

@glenn-jocher
Member

@JakobStadlhuber I don't know what you're asking

@JakobStadlhuber

JakobStadlhuber commented Feb 1, 2022

I mean, we get results from the training process such as mAP@0.5 or recall, but it would be really helpful to get the IoU so we can compare against the results of, for example, papers.
Do you know how to get this metric for trained models @glenn-jocher?

@glenn-jocher
Member

glenn-jocher commented Feb 1, 2022

@JakobStadlhuber IoU is not an output, IoU threshold is an input, i.e.:

python val.py --iou-thres 0.6

See val.py argparser for details:

yolov5/val.py

Lines 318 to 347 in 6445a81

```python
def parse_opt():
    parser = argparse.ArgumentParser()
    parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
    parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model.pt path(s)')
    parser.add_argument('--batch-size', type=int, default=32, help='batch size')
    parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)')
    parser.add_argument('--conf-thres', type=float, default=0.001, help='confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.6, help='NMS IoU threshold')
    parser.add_argument('--task', default='val', help='train, val, test, speed or study')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--workers', type=int, default=8, help='max dataloader workers (per RANK in DDP mode)')
    parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--verbose', action='store_true', help='report mAP by class')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--save-hybrid', action='store_true', help='save label+prediction hybrid results to *.txt')
    parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
    parser.add_argument('--save-json', action='store_true', help='save a COCO-JSON results file')
    parser.add_argument('--project', default=ROOT / 'runs/val', help='save to project/name')
    parser.add_argument('--name', default='exp', help='save to project/name')
    parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
    parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference')
    parser.add_argument('--dnn', action='store_true', help='use OpenCV DNN for ONNX inference')
    opt = parser.parse_args()
    opt.data = check_yaml(opt.data)  # check YAML
    opt.save_json |= opt.data.endswith('coco.yaml')
    opt.save_txt |= opt.save_hybrid
    print_args(FILE.stem, opt)
    return opt
```
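To make the input-vs-output distinction concrete, here is a minimal argparse sketch (an assumed two-flag subset of the parse_opt above, not the real file) showing that `--iou-thres` is simply parsed into a threshold value the evaluation consumes:

```python
import argparse

# Minimal subset of parse_opt(): --iou-thres is an input threshold,
# not a metric reported back by validation.
parser = argparse.ArgumentParser()
parser.add_argument('--conf-thres', type=float, default=0.001, help='confidence threshold')
parser.add_argument('--iou-thres', type=float, default=0.6, help='NMS IoU threshold')

# argparse maps the hyphenated flag to the attribute opt.iou_thres
opt = parser.parse_args(['--iou-thres', '0.65'])
print(opt.iou_thres)  # 0.65
```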

@JakobStadlhuber

JakobStadlhuber commented Feb 1, 2022

Okay, but isn't it also a metric, like the one mentioned in this paper:

Bildschirmfoto 2022-02-02 um 00 11 36

I would like to compare these numbers with the results from training; the problem is that they don't provide a mAP, so I thought I'd try to get an IoU metric value over a couple of test pictures.

@Yash-chowdary

I am also trying to find this out. Have you got any clarity about the IoU mentioned in papers? @JakobStadlhuber

@CatB1t

CatB1t commented May 26, 2022

If I understand your question right, I tried to implement a simple script to show the IoU score on each bounding box for my test data; here's the code.

The output is very similar to the original val.py output but extended with an IoU score for each bounding box. I didn't test it on large data, so it may be very slow or something may break. Here's an example of what the output looks like:

output

@silversurfer11

> If I understand your question right, I tried to implement a simple script to show the IoU score on each bounding box for my test data; here's the code.
>
> The output is very similar to the original val.py output but extended with an IoU score for each bounding box. I didn't test it on large data, so it may be very slow or something may break. Here's an example of what the output looks like:

output

@CatB1t could you confirm a few things about the code you linked to calculate the IoU of predicted and ground-truth bounding boxes?

  1. Images we want to use have to be stored in a folder like '.../dataset_name/images/test', which is to be specified in the dataset.yaml file, and the corresponding labels must be stored in '.../dataset_name/labels/test'.
  2. `for r in result.xywh[0]: iou = float(max(bbox_iou(r[None, :4], truth[:, 1:] * imgsz)))` In this step on line 80, you are calculating the IoU of each predicted bounding box (assuming result.xywh[0] gives a list of predicted bounding boxes) against all the ground-truth bounding boxes in label.txt one by one, and selecting the max IoU as the correct one for an object.
    Is my understanding correct? Thanks in advance.
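The matching step described in point 2 can be sketched in plain Python. The `iou_xyxy` helper and the example boxes below are illustrative, not part of the repo or the linked script:

```python
# Plain IoU for two boxes in x1, y1, x2, y2 format.
def iou_xyxy(a, b, eps=1e-9):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + eps)

preds = [(0, 0, 10, 10), (20, 20, 30, 30)]        # predicted boxes (xyxy)
truths = [(1, 1, 11, 11), (100, 100, 110, 110)]   # ground-truth boxes (xyxy)

# For each prediction, score it against every ground-truth box
# and keep the maximum IoU, as in the step quoted above.
best = [max(iou_xyxy(p, t) for t in truths) for p in preds]
print([round(v, 3) for v in best])  # [0.681, 0.0]
```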
