yolov8_obb val produces predicted boxes with large errors #13345

Open
111hyq111 opened this issue Jun 4, 2024 · 6 comments
Labels
question Further information is requested

Comments

111hyq111 commented Jun 4, 2024


@111hyq111 111hyq111 added the question Further information is requested label Jun 4, 2024

github-actions bot commented Jun 4, 2024

👋 Hello @111hyq111, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

Hello,

Thank you for reaching out and for checking the existing resources before posting your query.

It sounds like you're experiencing issues with large error margins in the bounding boxes during validation with YOLOv8 OBB. To better assist you, could you please provide a bit more detail:

  1. Sample Images or Outputs: If possible, share some examples where the predictions are significantly off.
  2. Model Configuration: Details about the model configuration and any specific parameters you've adjusted.
  3. Training Data: Information about the training dataset and whether the annotations might have inconsistencies.

These details will help us understand the issue more clearly and provide you with a more accurate solution.

Looking forward to your response!

@111hyq111
Author

[Attached images: val_batch1_labels, val_batch1_pred]

@glenn-jocher
Member

Hello,

Thank you for providing the images. It's clear from the visuals that there's a notable discrepancy in the bounding box predictions.

To further diagnose and address the issue, could you please provide the following additional details:

  1. Model Configuration: Could you share the .yaml configuration file or any specific parameters you've adjusted?
  2. Training Process: Information about the number of epochs, batch size, and any augmentation techniques used during training.
  3. Validation Setup: Details on how you're performing validation (e.g., specific command or script used).

These insights will help us pinpoint the root cause and guide you towards a potential solution. Looking forward to your response!

@111hyq111
Author


Model Configuration is this:

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 Oriented Bounding Boxes (OBB) model with P3-P5 outputs. For usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 4 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024] # YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs
  s: [0.33, 0.50, 1024] # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients, 28.8 GFLOPs
  m: [0.67, 0.75, 768] # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients, 79.3 GFLOPs
  l: [1.00, 1.00, 512] # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512] # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9

# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 3, C2f, [512]] # 12
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 3, C2f, [256]] # 15 (P3/8-small)
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 12], 1, Concat, [1]] # cat head P4
  - [-1, 3, C2f, [512]] # 18 (P4/16-medium)
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 9], 1, Concat, [1]] # cat head P5
  - [-1, 3, C2f, [1024]] # 21 (P5/32-large)
  - [[15, 18, 21], 1, OBB, [nc, 1]] # OBB(P3, P4, P5)

Training Process is this:
from ultralytics import YOLO

model = YOLO("/media/hyq/西部数据2TB/ultralytics/aoi_config/yolov8-obb.yaml").load("/media/hyq/西部数据2TB/ultralytics/yolov8n.pt")
results = model.train(data="/media/hyq/西部数据2TB/ultralytics/aoi_config/dota8.yaml", epochs=100, imgsz=1024, batch=4)

dota8.yaml is this:

# Ultralytics YOLO 🚀, AGPL-3.0 license
# DOTA8 dataset 8 images from split DOTAv1 dataset by Ultralytics
# Documentation: https://docs.ultralytics.com/datasets/obb/dota8/
# Example usage: yolo train model=yolov8n-obb.pt data=dota8.yaml
# parent
# ├── ultralytics
# └── datasets
#     └── dota8 ← downloads here (1MB)

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: /media/hyq/西部数据2TB/ultralytics/data # dataset root dir
train: images/train # train images (relative to 'path') 4 images
val: images/val # val images (relative to 'path') 4 images

# Classes for DOTA 1.0
names:
  0: component
  1: pin
  2: text
  3: logo

# Download script/URL (optional)
download: https://github.com/ultralytics/yolov5/releases/download/v1.0/dota8.zip

Validation Setup is this:
model1 = YOLO("/media/hyq/西部数据2TB/ultralytics/runs/obb/train/weights/best.pt") # load the trained model
results1 = model1("/media/hyq/西部数据2TB/dota/images/val/10799.png", imgsz=1024, save=True)

@glenn-jocher
Member

@111hyq111 thank you for providing the detailed configuration and setup information. It helps clarify the setup you're working with.

From the details you've shared, your configuration and training setup seem appropriate for the task. However, the discrepancies in the bounding box predictions during validation might be influenced by several factors:

  1. Model Overfitting: Given the small dataset size mentioned in your dota8.yaml, the model might be overfitting to the training data. Consider using more data or applying data augmentation techniques to increase the generalizability of the model.

  2. Learning Rate and Epochs: The learning rate and number of epochs could also impact the model's performance. If the learning rate is too high or too low, or if the model is not trained for enough epochs, it might not converge to a good solution.

  3. Annotation Quality: Ensure that the annotations in your dataset are accurate and consistent. Errors in the training data annotations can lead to poor model performance.

  4. Post-Processing: Check the post-processing steps during prediction. Sometimes, incorrect settings for thresholding or non-max suppression can lead to poor results.
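Point 3 (annotation quality) can be checked mechanically. A minimal sketch in plain Python (not part of the Ultralytics API), assuming the YOLO OBB label format of one object per line, i.e. a class index followed by eight normalized corner coordinates:

```python
def check_obb_label_line(line: str) -> bool:
    """Return True if one label line looks like a valid YOLO OBB annotation:
    a non-negative integer class index followed by eight corner coordinates,
    each normalized to the [0, 1] range."""
    parts = line.split()
    if len(parts) != 9:  # class + four (x, y) corner pairs
        return False
    try:
        values = [float(p) for p in parts]
    except ValueError:
        return False
    cls, coords = values[0], values[1:]
    return cls >= 0 and cls.is_integer() and all(0.0 <= c <= 1.0 for c in coords)
```

Running this over every line of every label file quickly surfaces truncated lines, unnormalized pixel coordinates, or stray text before they silently degrade training.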

To further diagnose the issue, you might consider:

  • Visualizing the training loss and validation metrics over epochs to check for signs of overfitting.
  • Experimenting with different learning rates or more sophisticated learning rate schedules.
  • Increasing the diversity and size of your training dataset.

If the issue persists, please consider sharing more specific logs or error messages that might be appearing during training or validation. This could provide further insights into what might be going wrong.
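The first bullet (comparing training loss against validation metrics over epochs) can be reduced to a simple heuristic. A minimal sketch, assuming the per-epoch loss values have already been read out of runs/obb/train/results.csv (the column names there vary between ultralytics versions, so the input is treated as plain lists):

```python
def overfitting_signal(train_loss, val_loss, window=5):
    """Heuristic overfitting flag: over the last `window` epochs, training loss
    keeps decreasing while validation loss increases."""
    if min(len(train_loss), len(val_loss)) <= window:
        return False  # not enough history to judge a trend
    train_delta = train_loss[-1] - train_loss[-1 - window]
    val_delta = val_loss[-1] - val_loss[-1 - window]
    return train_delta < 0 and val_delta > 0
```

A True result supports the dataset-size explanation above; a False result points the investigation toward annotations or post-processing instead.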
