Evaluation metrics implementation VS pycocotools #13363
Comments
Hello, thank you for reaching out with your query regarding the differences in evaluation metrics between pycocotools and Ultralytics' built-in model.val(). The discrepancies you're observing might be due to several factors, including differences in the IoU thresholds, area ranges, and maximum detections (maxDets) settings used during evaluation. Ultralytics' validation routine uses its own defaults for these settings, which do not necessarily match those of pycocotools. To align the evaluation metrics more closely with those provided by pycocotools, you can adjust the IoU thresholds and other relevant parameters in the YOLOv8 validation configuration to match those used by pycocotools. This should help in achieving a fairer comparison between different models. If you need specific guidance on how to adjust these settings or further assistance, please feel free to ask!
Yes, what are the parameters used by Ultralytics, and how can I replicate them with pycocotools in another repo? Or, if it's easier, how do I replicate pycocotools by changing the Ultralytics implementation?
Hello! To align the evaluation metrics between Ultralytics and pycocotools, you can adjust the parameters in Ultralytics' validation settings to match those typically used by pycocotools. Here are the key parameters you might consider:

- IoU thresholds: pycocotools evaluates over 0.50:0.95 in steps of 0.05 (10 thresholds).
- Area ranges: all, small (< 32² px), medium (32²–96² px), and large (> 96² px) objects.
- Maximum detections (maxDets): 1, 10, and 100 detections per image.
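For reference, the IoU threshold grid that pycocotools evaluates over can be generated like this (a small illustrative snippet, mirroring the default `Params.iouThrs` values):

```python
import numpy as np

# COCO-style IoU threshold grid: 0.50 to 0.95 in steps of 0.05,
# i.e. 10 thresholds, matching pycocotools' default Params.iouThrs.
iou_thrs = np.linspace(0.5, 0.95, int(round((0.95 - 0.5) / 0.05)) + 1)
print(iou_thrs.tolist())
```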
To adjust these settings in Ultralytics, you can modify the validation configuration file or pass these parameters directly through the CLI or Python API. For example: model.val(data='dataset.yaml', imgsz=640, conf=0.25, iou=0.6, max_det=100). This should help you achieve comparable evaluation metrics between the two tools. If you need more specific adjustments or further assistance, please let me know! 🚀
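For a truly apples-to-apples comparison, a common approach is to have Ultralytics dump COCO-format predictions and then score them with pycocotools itself. A hedged sketch (the weights path and dataset yaml are illustrative assumptions; the import is inside the function so the snippet loads even without ultralytics installed):

```python
def val_with_coco_json(weights="yolov8n.pt", data="coco.yaml"):
    """Run Ultralytics validation and also write COCO-format predictions.

    save_json=True makes val() write a predictions.json alongside the run,
    which pycocotools can score directly.
    """
    from ultralytics import YOLO  # lazy import: sketch loads without the package

    model = YOLO(weights)
    return model.val(data=data, save_json=True)
```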
Thank you.
Hello, thank you for reaching out! To help us investigate the issue effectively, could you please provide a minimum reproducible code example? This will allow us to replicate the problem on our end and work towards a solution. You can find guidelines on how to create a minimum reproducible example here. Additionally, please ensure that you are using the latest versions of torch and ultralytics:

pip install --upgrade torch
pip install --upgrade ultralytics

Once you've updated your packages and provided the reproducible code, we'll be able to dive deeper into the issue. If you have any other questions or need further assistance, feel free to ask! 😊
Search before asking
Question
The mAP50 and mAP50-95 results I get by running pycocotools and Ultralytics' built-in model.val() are very different; while they usually correlate, this is not always the case. What am I missing here?
I need a fair comparison against other models that use pycocotools for evaluation.
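Part of any gap between implementations can come from how AP is integrated over the precision-recall curve. A minimal illustrative sketch of COCO-style 101-point interpolation (not the exact code of either library):

```python
import numpy as np

def ap_101(recall, precision):
    """COCO-style AP: average interpolated precision at 101 recall points."""
    # Pad the curve so interpolation is defined over the full [0, 1] range.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    # Make precision monotonically non-increasing from right to left.
    mpre = np.maximum.accumulate(mpre[::-1])[::-1]
    # Sample at the 101 recall points 0.00, 0.01, ..., 1.00.
    rec_points = np.linspace(0.0, 1.0, 101)
    idx = np.searchsorted(mrec, rec_points, side="left")
    return float(np.mean(mpre[idx]))

# A "perfect detector" curve (precision 1.0 at every recall) scores AP = 1.0.
print(ap_101(np.array([0.5, 1.0]), np.array([1.0, 1.0])))  # → 1.0
```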
Additional
For pycocotools, one would get the full COCOeval summary printout (the standard AP/AR table).
Is there any way I can reproduce this with Ultralytics' implemented models?
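One way to get that exact printout for an Ultralytics model is to score its COCO-format predictions (e.g. from model.val(save_json=True)) with pycocotools directly. A hedged sketch — the annotation and prediction file paths are illustrative assumptions, and the imports are lazy so the snippet loads without pycocotools installed:

```python
def coco_summarize(ann_json, pred_json):
    """Score COCO-format detections and print the standard AP/AR table."""
    from pycocotools.coco import COCO            # lazy import
    from pycocotools.cocoeval import COCOeval

    coco_gt = COCO(ann_json)                     # ground-truth annotations (COCO format)
    coco_dt = coco_gt.loadRes(pred_json)         # detections, e.g. Ultralytics' predictions.json
    ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
    ev.evaluate()
    ev.accumulate()
    ev.summarize()                               # prints the familiar 12-line AP/AR table
    return ev.stats                              # stats[0] = AP@[.50:.95], stats[1] = AP@.50
```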