How does YOLOv5 calculate mAP@.5 and mAP@.5:.95 and plot images? #4052

Closed
Github-Vicente opened this issue Jul 18, 2021 · 3 comments
Labels: question (Further information is requested), Stale

Comments

Github-Vicente commented Jul 18, 2021

❔Question

I have three questions:

  1. When I use val.py, no matter what --iou-thres and --conf-thres I set, test_batch0_pred.jpg only plots bboxes whose confidence is greater than the default value of 0.25.
     So if I set --iou-thres and --conf-thres myself, it seems the mAP calculation and the plotting function operate under different criteria. Is that normal?
     The records in the txt files do follow the --conf-thres I set, but test_batch0_pred.jpg does not.
     Where can I control the plotting threshold?

  2. Sometimes val.py outputs a test_batch0_pred.jpg that is a mosaic image. Has anyone else faced this problem?

  3. Is it normal that running val.py and detect.py on the same dataset, without any custom parameters, gives different results?

Thank you!

Github-Vicente added the question label on Jul 18, 2021
github-actions bot (Contributor) commented Jul 18, 2021

👋 Hello @Github-Vicente, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.

Requirements

Python>=3.6.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/cuDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher (Member) commented Jul 18, 2021

@Github-Vicente 👋 Hello, thank you for asking about the differences between train.py, detect.py and test.py in YOLOv5.

These 3 files are designed for different purposes and utilize different dataloaders with different settings. train.py dataloaders are designed for a speed-accuracy compromise, test.py is designed to obtain the best mAP on a validation dataset, and detect.py is designed for best real-world inference results. A few important aspects of each:

train.py

  • trainloader LoadImagesAndLabels(): designed to load train dataset images and labels. Augmentation capable and enabled.

    yolov5/train.py

    Lines 188 to 192 in fca5e2a

    # Trainloader
    dataloader, dataset = create_dataloader(train_path, imgsz, batch_size, gs, opt,
                                            hyp=hyp, augment=True, cache=opt.cache_images, rect=opt.rect, rank=rank,
                                            world_size=opt.world_size, workers=opt.workers,
                                            image_weights=opt.image_weights, quad=opt.quad, prefix=colorstr('train: '))
  • testloader LoadImagesAndLabels(): designed to load val dataset images and labels. Augmentation capable but disabled.

    yolov5/train.py

    Lines 199 to 202 in fca5e2a

    testloader = create_dataloader(test_path, imgsz_test, batch_size * 2, gs, opt,  # testloader
                                   hyp=hyp, cache=opt.cache_images and not opt.notest, rect=True, rank=-1,
                                   world_size=opt.world_size, workers=opt.workers,
                                   pad=0.5, prefix=colorstr('val: '))[0]
  • image size: 640
  • rectangular inference: False
  • confidence threshold: 0.001
  • iou threshold: 0.6
  • multi-label: True
  • padding: None
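
The threshold bullets above describe the NMS configuration applied during train.py's embedded validation pass. As a minimal runnable sketch, assuming the non_max_suppression utility from utils/general.py at this commit (the pred tensor is a tiny random placeholder for raw model output):

    import torch
    from utils.general import non_max_suppression  # YOLOv5 repo utility

    # Placeholder raw model output: (batch, num_anchors, 5 + num_classes), kept tiny for the sketch
    pred = torch.rand(1, 100, 85)

    # These values mirror the validation settings listed above
    out = non_max_suppression(pred, conf_thres=0.001, iou_thres=0.6, multi_label=True)
    # out: one (n, 6) tensor per image -> (x1, y1, x2, y2, conf, cls)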

test.py

  • dataloader LoadImagesAndLabels(): designed to load train, val, test dataset images and labels. Augmentation capable but disabled.

    yolov5/test.py

    Lines 89 to 90 in fca5e2a

    dataloader = create_dataloader(data[task], imgsz, batch_size, gs, opt, pad=0.5, rect=True,
                                   prefix=colorstr(f'{task}: '))[0]
  • image size: 640
  • rectangular inference: True
  • confidence threshold: 0.001
  • iou threshold: 0.6
  • multi-label: True
  • padding: 0.5 * maximum stride
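
Since the title asks how mAP@.5 and mAP@.5:.95 are computed: test.py matches predictions to ground truth at 10 IoU thresholds from 0.50 to 0.95 in steps of 0.05 and computes per-class AP at each threshold. mAP@.5 averages the first column over classes; mAP@.5:.95 averages over classes and all 10 thresholds. A minimal sketch (the iouv vector matches test.py; the ap array is a random placeholder purely for illustration):

    import torch

    iouv = torch.linspace(0.5, 0.95, 10)  # IoU thresholds 0.50, 0.55, ..., 0.95 for mAP@0.5:0.95

    ap = torch.rand(80, 10)   # placeholder: per-class AP at each IoU threshold (80 classes x 10 thresholds)
    map50 = ap[:, 0].mean()   # mAP@0.5: AP at IoU 0.50, averaged over classes
    map5095 = ap.mean()       # mAP@0.5:0.95: averaged over classes and all 10 IoU thresholds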

detect.py

  • dataloaders (multiple): designed for loading multiple types of media (images, videos, globs, directories, streams).

    yolov5/detect.py

    Lines 46 to 53 in fca5e2a

    # Set Dataloader
    vid_path, vid_writer = None, None
    if webcam:
        view_img = check_imshow()
        cudnn.benchmark = True  # set True to speed up constant image size inference
        dataset = LoadStreams(source, img_size=imgsz, stride=stride)
    else:
        dataset = LoadImages(source, img_size=imgsz, stride=stride)
  • image size: 640
  • rectangular inference: True
  • confidence threshold: 0.25
  • iou threshold: 0.45
  • multi-label: False
  • padding: None
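
These defaults also answer question 3: the same weights pass through two differently configured NMS calls, so test.py/val.py and detect.py disagree by design. A sketch of the contrast, again assuming the non_max_suppression utility and a tiny placeholder output:

    import torch
    from utils.general import non_max_suppression

    pred = torch.rand(1, 100, 85)  # placeholder raw model output

    # test.py / val.py: very low conf threshold so precision-recall curves (and mAP) are densely sampled
    val_out = non_max_suppression(pred, conf_thres=0.001, iou_thres=0.6, multi_label=True)

    # detect.py: higher conf threshold tuned for clean real-world detections
    det_out = non_max_suppression(pred, conf_thres=0.25, iou_thres=0.45, multi_label=False)

As for questions 1 and 2: the test_batch*_pred.jpg files are drawn by a separate plotting utility (plot_images() in utils/plots.py) that intentionally tiles the whole batch into one mosaic and, at the time of this issue, applied its own hard-coded 0.25 display confidence threshold independent of --conf-thres, so editing that constant in plot_images() is where the plotted threshold would be controlled.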

YOLOv5 PyTorch Hub

The models.autoShape() class is used for image loading, preprocessing, inference and NMS. For more info see the YOLOv5 PyTorch Hub Tutorial.

yolov5/models/common.py

Lines 225 to 250 in fca5e2a

class autoShape(nn.Module):
    # input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS
    conf = 0.25  # NMS confidence threshold
    iou = 0.45  # NMS IoU threshold
    classes = None  # (optional list) filter by class

    def __init__(self, model):
        super(autoShape, self).__init__()
        self.model = model.eval()

    def autoshape(self):
        print('autoShape already enabled, skipping... ')  # model already converted to model.autoshape()
        return self

    @torch.no_grad()
    @torch.cuda.amp.autocast(torch.cuda.is_available())
    def forward(self, imgs, size=640, augment=False, profile=False):
        # Inference from various sources. For height=640, width=1280, RGB images example inputs are:
        #   filename:  imgs = 'data/samples/zidane.jpg'
        #   URI:            = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/zidane.jpg'
        #   OpenCV:         = cv2.imread('image.jpg')[:,:,::-1]  # HWC BGR to RGB x(640,1280,3)
        #   PIL:            = Image.open('image.jpg')  # HWC x(640,1280,3)
        #   numpy:          = np.zeros((640,1280,3))  # HWC
        #   torch:          = torch.zeros(16,3,320,640)  # BCHW (scaled to size=640, 0-1 values)
        #   multiple:       = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...]  # list of images
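
A short usage sketch of the class above via PyTorch Hub; model.conf and model.iou are the same NMS knobs listed under detect.py, and the image URL is just an example:

    import torch

    # Load a pretrained YOLOv5s model from PyTorch Hub (weights download on first use)
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

    # autoShape NMS settings, adjustable before inference
    model.conf = 0.25  # NMS confidence threshold
    model.iou = 0.45   # NMS IoU threshold

    results = model('https://ultralytics.com/images/zidane.jpg')  # example image
    results.print()    # print a detection summary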

github-actions bot (Contributor) commented Aug 18, 2021

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
