
Why are the results of the detect script not the same as the results of the val script? #13084

Open
ThreeStones1029 opened this issue Jun 12, 2024 · 2 comments
Labels
question Further information is requested

Comments

@ThreeStones1029

Search before asking

Question

Why are the results of the detect and val scripts not the same, with the detect results much worse? The reproduction process:

  1. I used the same model and validation set.
  2. I set the same confidence threshold, IoU threshold, and maximum number of detections.
  3. I ran the val script, saved the JSON results, and converted them to a format suitable for COCO evaluation.
  4. I saved the results of the detect script as a separate JSON file.

But when I used COCO's API to evaluate these two result files, the mAP values were far apart. The first is the COCO evaluation of the val script's output:
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.323
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.519
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.363
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.035
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.344
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.251
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.444
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.444
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.069
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.480

The second is the COCO evaluation of the detect script's output:

 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.230
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.427
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.247
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.039
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.289
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.107
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.281
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.281
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.062
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.322
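
For reference, a minimal sketch of the COCO evaluation step, assuming pycocotools and hypothetical file names instances_val.json (ground truth) and predictions.json (detections):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_val.json")  # ground-truth annotations in COCO format
coco_dt = coco_gt.loadRes("predictions.json")  # detections in COCO results format
evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")  # bounding-box evaluation
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints an AP/AR table like the ones above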

I also found that there are more annotations (anns) in the JSON file produced by the val script than in the JSON file produced by the detect script. Each ann is a dictionary like this:

{
        "image_id": 63,
        "category_id": 4,
        "bbox": [
            389.792,
            537.801,
            336.013,
            207.175
        ],
        "score": 0.96367,
        "file_name": "ceng_fu_di_5.bmp",
        "category_name": "L2"
    }

I hope to get a reply, thank you very much!

Additional

No response

@ThreeStones1029 ThreeStones1029 added the question Further information is requested label Jun 12, 2024
Contributor

👋 Hello @ThreeStones1029, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt dependencies installed, including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

  - Notebooks with free GPU (Google Colab, Kaggle)
  - Google Cloud Deep Learning VM (GCP Quickstart Guide)
  - Amazon Deep Learning AMI (AWS Quickstart Guide)
  - Docker Image (Docker Quickstart Guide)

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics

@glenn-jocher
Member

@ThreeStones1029 hello,

Thank you for your detailed report and for providing the reproduction steps. To help us investigate the issue further, could you please provide a minimum reproducible code example? This will allow us to better understand the context and replicate the issue on our end. You can refer to our guide on creating a minimum reproducible example here: Minimum Reproducible Example.

Additionally, please ensure that you are using the latest versions of torch and the YOLOv5 repository. You can update your packages with the following commands:

pip install --upgrade torch
git pull https://github.com/ultralytics/yolov5

There are a few potential reasons for the discrepancy between the detect.py and val.py results:

  1. Post-processing Differences: The detect.py script might apply different post-processing steps compared to val.py, and the two scripts do not ship with the same default thresholds. Ensure that the confidence threshold, IoU threshold, and other parameters are passed explicitly and consistently to both scripts (see the example after this list).

  2. Evaluation Metrics: The val.py script is specifically designed for evaluation and might include additional metrics or processing steps that are not present in detect.py.

  3. Annotation Differences: As you mentioned, there are more annotations in the JSON file produced by val.py. This could be due to differences in how detections are filtered or processed in each script.
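
As a concrete starting point, a minimal sketch of running both scripts with explicitly matched parameters (best.pt, data.yaml, and the image directory are placeholders for your own files):

python val.py --weights best.pt --data data.yaml --conf-thres 0.25 --iou-thres 0.45 --max-det 100 --save-json  # evaluation + JSON export
python detect.py --weights best.pt --source datasets/val/images --conf-thres 0.25 --iou-thres 0.45 --max-det 100 --save-txt --save-conf  # inference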

To further diagnose the issue, you can try the following steps:

  1. Consistency Check: Ensure that both scripts are using the same model weights, dataset, and configuration parameters.

  2. Debugging: Add print statements or logging to both scripts to compare the intermediate outputs and identify where the differences arise.

  3. Manual Inspection: Manually inspect a few samples from the JSON files produced by both scripts to understand the nature of the discrepancies; a small comparison sketch follows this list.
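
For the manual inspection step, a small sketch that compares per-image detection counts between the two result files (val_preds.json and detect_preds.json are hypothetical names):

import json
from collections import Counter

with open("val_preds.json") as f:
    val_anns = json.load(f)
with open("detect_preds.json") as f:
    det_anns = json.load(f)

# count detections per image in each file
val_counts = Counter(a["image_id"] for a in val_anns)
det_counts = Counter(a["image_id"] for a in det_anns)

# report images where the two scripts produced different numbers of boxes
for img_id in sorted(set(val_counts) | set(det_counts)):
    if val_counts[img_id] != det_counts[img_id]:
        print(f"image {img_id}: val={val_counts[img_id]}, detect={det_counts[img_id]}")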

If you can share the specific code snippets or configurations you are using for both scripts, that would be very helpful. We appreciate your patience and cooperation as we work together to resolve this issue.
