Prediction box is too small for the object: if bounding boxes are too big, even taking up the whole picture, how do I train on this dataset? #2949
Comments
👋 Hello @tongchangD, thank you for your interest in YOLOv8 🚀! We recommend a visit to the YOLOv8 Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered. If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it. If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Install: pip install the ultralytics package with `pip install ultralytics`.

Environments: YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status: If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@glenn-jocher please help me
Hello @tongchangD, thank you for reaching out to us for help. We understand that you are facing difficulty in training large bounding boxes which take up the whole image in YOLOv8.

To address this issue, you could try increasing the image size parameter in your training code. Increasing the image size may help to improve the accuracy of predictions and allow the model to capture more details that may be lost in smaller images. Alternatively, you could also try adjusting the anchors in your model configuration. Anchors play a crucial role in defining the size and shape of the detection boxes in your model; changing these values might help to improve the detection of larger objects in your images.

We hope this helps! If you have any further questions or concerns, please feel free to reach out to us.
Hello @glenn-jocher, thank you for replying to my issue. I tried different image sizes during training. In my dataset the image size is 720*1280; I set imgsz=640 or imgsz=1280 and mosaic=0, 0.5, 0.7, or 1. In every setting the predicted boxes for large objects are too small, but imgsz=640 works slightly better than imgsz=1280.
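A quick way to see what each imgsz setting does to a 720×1280 frame is to compute the resize scale. This is a minimal sketch, assuming YOLO's standard aspect-preserving resize of the longest side down to imgsz (the helper name is hypothetical, not an ultralytics function):

```python
def letterbox_scale(h, w, imgsz):
    """Return the (new_h, new_w, ratio) that an aspect-preserving
    resize of the longest side to `imgsz` would produce."""
    r = imgsz / max(h, w)
    return round(h * r), round(w * r), r

# A 720x1280 frame at the two settings discussed above:
print(letterbox_scale(720, 1280, 640))   # (360, 640, 0.5)
print(letterbox_scale(720, 1280, 1280))  # (720, 1280, 1.0)
```

At imgsz=640 the whole-image box shrinks to about 640 px wide, which may land it back inside the range the regression head can express, one possible reason imgsz=640 behaved slightly better here.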
Hello @tongchangD, thank you for bringing this to our attention. It sounds like you are experiencing difficulties when training larger objects in YOLOv8 compared to YOLOv5. While YOLOv8 does have anchor-free object detection, it still requires anchor-like reference points to detect objects successfully. In some cases, anchor-free detectors may face difficulties when dealing with larger objects. However, increasing the size of the input image, the input feature maps, and changing the anchor-like reference points' shape and size may help to improve detection results.

Regarding your table, YOLOv5 may produce better mAP on objects in your dataset compared to YOLOv8, as each model may have been trained on a different subset of the dataset or may feature different regularization techniques, making it difficult to make a direct comparison. Therefore, we recommend experimenting with different model configurations (hyperparameters, input sizes, feature maps, anchor-like reference points, etc.), or using other object detection models, to identify the best parameters for your specific application.

I hope this explanation helps to address your concerns. If you have any further questions or concerns, please do not hesitate to reach out to us.
You can set reg_max to 32 or another value; a larger reg_max gives better results on huge objects.
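For context on why this helps: YOLOv8's DFL head regresses each box side as a discrete distribution over reg_max bins measured in stride units, so the largest distance it can express from an anchor point to a box edge is roughly (reg_max - 1) × stride. A minimal sketch of that limit, assuming the default detection strides of 8/16/32:

```python
def max_box_side(reg_max, stride):
    """Largest distance (pixels) from an anchor point to a box edge that a
    DFL head with `reg_max` bins can express at a given stride: the last
    bin index is reg_max - 1, measured in stride units."""
    return (reg_max - 1) * stride

# Default YOLOv8 head (reg_max=16) at the coarsest stride (32):
print(max_box_side(16, 32))  # 480  -> ~960 px total box width/height
# After raising reg_max to 32 as suggested above:
print(max_box_side(32, 32))  # 992  -> ~1984 px total
```

Under these assumptions, a box spanning a full 1280 px image exceeds what the default reg_max=16 head can regress at stride 32, which matches the symptom of predicted boxes saturating smaller than the ground truth.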
@lesterlee89 yes, you can see it at this address
Hello @lesterlee89, thank you for reaching out to us regarding your query. If you're experiencing issues where the bounding box of large objects in your dataset is too small and not detected correctly, you can try adjusting the `reg_max` parameter.

We hope this helps! Please let us know if you have any further questions or concerns.
Hi @glenn-jocher, I am wondering where we can configure the `reg_max` parameter.
I'm unable to find this parameter in the YOLOv8 default.yaml config. Does it need to be modified directly in the source code?
Best regards
@sanjeevnara7 hi there! Yes, the `reg_max` parameter is not exposed in the YOLOv8 default.yaml config. If you want to modify `reg_max`, you'll need to change it directly in the source code where the detection head is defined. Remember that after modifying the source code, it's essential to retrain the model so the modifications are accounted for. Best regards!
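As background on what `reg_max` controls: the DFL decode takes a softmax over the reg_max bins for each box side and returns the expected bin index as the predicted distance (in stride units), so predictions saturate at reg_max - 1 no matter how confident the logits are. A minimal pure-Python sketch of that decode (hypothetical helper name, not the ultralytics implementation):

```python
import math

def dfl_expected_distance(logits):
    """Decode one box side from a DFL distribution: softmax over the
    reg_max bins, then the expectation of the bin index (stride units)."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return sum(i * e / total for i, e in enumerate(exps))

# With reg_max = 4 bins and nearly all mass on the last bin, the
# predicted distance saturates at reg_max - 1 = 3 stride units:
print(round(dfl_expected_distance([0.0, 0.0, 0.0, 50.0]), 3))  # 3.0
```

This saturation is why objects larger than the expressible range come out with boxes that are systematically too small, and why raising reg_max (and then retraining) extends the range.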
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help. For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Search before asking
Question
If bounding boxes are too big, even taking up the whole picture, how do I train on this data?
Now my settings are:
imgsz = image size / half of image size
mosaic = 0/0.3/0.5/0.7/1
In each setting, the prediction box is too small, like in this issue.
The situation is as follows
The red box marks the image size and the yellow box is the label box, but the model predicts the green box; the predicted box is too small.
But YOLOv5 predicts correctly. What should I do, or why does this happen?
Additional
No response