YOLOv8-OBB learns 0° rotation #13081
👋 Hello @simoneangarano, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package:

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
How are you displaying your bounding boxes? The results object that you get as a return value from predict has several bounding box coordinate types, for example results[0].obb.xyxyxyxy for 4 pairs of xy coords for each corner.
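For reference, the geometry behind those corner coordinates is just a rotation of the half-extents around the box center. Here is a minimal NumPy sketch; the helper name `xywhr_to_corners` is an illustration of the conversion, not part of the Ultralytics API:

```python
import numpy as np

# Hypothetical helper (not Ultralytics code): convert a rotated box given as
# (center_x, center_y, width, height, angle in radians) into the 4 corner
# points of the kind that obb.xyxyxyxy exposes.
def xywhr_to_corners(cx, cy, w, h, angle):
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])  # counter-clockwise rotation matrix
    half = np.array([[-w, -h], [w, -h], [w, h], [-w, h]]) / 2.0  # corners of the unrotated box
    return half @ rot.T + np.array([cx, cy])  # rotate, then translate to the center

# A box at the origin with angle 0 is axis-aligned; at 90 degrees, width and
# height effectively swap.
print(xywhr_to_corners(0.0, 0.0, 4.0, 2.0, 0.0))
```

Comparing a prediction's corners against this for the labeled angle is a quick way to check whether the rotation is really 0° or just small.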
I'm using results[0].plot(). Even the visualizations auto-generated during training (val_batch0_pred.jpg) show bboxes that are only slightly rotated.
@simoneangarano hello! It sounds like the model might be struggling to learn the angle variations effectively. This could be due to a variety of factors, including the diversity of angles in the training data or the specific way the loss function handles the angle predictions. Checking both of those would help narrow down the issue.
For visualizing the predictions with more clarity on rotations, you might consider manually plotting the bounding boxes from the raw corner coordinates. Here's a quick example of how you can plot these manually for more detailed inspection:

import matplotlib.pyplot as plt
import numpy as np

# Assuming 'result' is your prediction result for one image
boxes = result.obb.xyxyxyxy  # Oriented bounding box corner coordinates, shape (N, 4, 2)

fig, ax = plt.subplots(1)
ax.imshow(result.orig_img)  # Plot the original image

# Plot each OBB as a closed polygon
for box in boxes:
    poly = plt.Polygon(box.cpu().numpy().reshape(-1, 2), closed=True, edgecolor='r', fill=None)
    ax.add_patch(poly)

plt.show()

This might help you visually confirm the model's performance on angle predictions more precisely.
How do I increase the weight for the angle component? I don't see any training hyperparameter specific to it.
@simoneangarano hello! In the current YOLOv8 implementation, direct hyperparameter adjustment for the angle component in the loss function isn't exposed via the training configuration. However, you can modify the source code where the loss is computed to manually increase the weight for the angle component.

If you need specific guidance on which file or line to edit, I can help you further if you provide more details about your setup or the version of YOLOv8 you are using. 🚀
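The general idea can be sketched without any Ultralytics internals: compute the angle error as its own term and scale it by its own gain before summing, mirroring how the existing box/cls/dfl gains work. Everything below is illustrative; the function name and the `angle_gain` parameter are assumptions, not part of the library:

```python
import numpy as np

# Sketch of a weighted multi-part loss (hypothetical, not Ultralytics code).
# Each component gets its own gain, the way hyp.box / hyp.cls scale the
# existing terms; angle_gain is the new knob for the rotation component.
def total_loss(box_loss, cls_loss, pred_angles, target_angles,
               box_gain=7.5, cls_gain=0.5, angle_gain=1.0):
    # MSE between predicted and target rotation angles
    angle_loss = np.mean((np.asarray(target_angles) - np.asarray(pred_angles)) ** 2)
    return box_gain * box_loss + cls_gain * cls_loss + angle_gain * angle_loss

# Doubling angle_gain changes only the angle term's contribution
base = total_loss(1.0, 1.0, [0.0, 0.1], [0.2, 0.3], angle_gain=1.0)
boosted = total_loss(1.0, 1.0, [0.0, 0.1], [0.2, 0.3], angle_gain=2.0)
```

Raising the gain makes gradient updates from angle errors proportionally larger relative to the other components, which is the lever you'd tune if the model under-fits rotation.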
Thanks!
Hello! For adjusting the rotation loss in YOLOv8n-OBB, you'll need to dive into the source code where the model and its loss functions are defined. Typically, this would be in the files where the model's forward pass and loss calculations are implemented; since the exact location can vary with updates, I recommend checking the files related to model definitions and loss functions. If you're not familiar with navigating the codebase, using a search function in your IDE for keywords like "loss" and "angle" or "rotation" might speed things up. 🚀 Hope this helps! If you need more specific pointers, feel free to ask.
I guess the angles were not being considered in the default loss, as manually adding an additional angle loss component solved the problem. This is the code I modified:

# Cls loss
# loss[1] = self.varifocal_loss(pred_scores, target_scores, target_labels) / target_scores_sum  # VFL way
loss[1] = self.bce(pred_scores, target_scores.to(dtype)).sum() / target_scores_sum  # BCE

# Bbox loss
if fg_mask.sum():
    target_bboxes[..., :4] /= stride_tensor
    loss[0], loss[2] = self.bbox_loss(
        pred_distri, pred_bboxes, anchor_points, target_bboxes, target_scores, target_scores_sum, fg_mask
    )
    # Angle loss: MSE between target and predicted rotation angles
    angle_diff = target_bboxes[fg_mask][:, -1] - pred_bboxes[fg_mask][:, -1]
    angle_loss = (angle_diff ** 2).mean()
else:
    loss[0] += (pred_angle * 0).sum()
    angle_loss = 0.0  # no foreground boxes, so no angle term (avoids NaN from an empty mean)

loss[0] *= self.hyp.box  # box gain
loss[1] *= self.hyp.cls  # cls gain
loss[2] *= self.hyp.dfl  # dfl gain
loss[3] = angle_loss * self.hyp.ang  # angle gain (custom hyperparameter)

return loss.sum() * batch_size, loss.detach()  # loss(box, cls, dfl, angle)
Hey there! 🚀 Great job on integrating the angle loss into the model! Using MSE for the angle loss is a solid choice, as it emphasizes larger errors and is generally well-behaved during optimization. However, you might also consider experimenting with the Smooth L1 loss, which is less sensitive to outliers than MSE; this can sometimes lead to better performance, especially in cases where the angle variation is large. Here's how you could modify your code to use Smooth L1 loss for the angle:

import torch.nn.functional as F

angle_loss = F.smooth_l1_loss(pred_bboxes[fg_mask][:, -1], target_bboxes[fg_mask][:, -1], reduction='mean')

This change might provide a more robust training process with respect to angle predictions. If you're seeing good results and want to contribute, a pull request would be fantastic! The community would definitely benefit from your insights and improvements. 😊
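To see why Smooth L1 is gentler on outliers, here is a standalone NumPy sketch of the piecewise function that `F.smooth_l1_loss` computes (with its default beta of 1.0), compared against MSE on a few example angle errors:

```python
import numpy as np

def smooth_l1(diff, beta=1.0):
    """Smooth L1 (Huber-style) loss: quadratic for |diff| < beta, linear beyond."""
    a = np.abs(diff)
    return np.where(a < beta, 0.5 * a ** 2 / beta, a - 0.5 * beta)

errors = np.array([0.1, 0.5, 2.0, 5.0])  # example angle errors (radians)
print("MSE terms:      ", errors ** 2)        # grows quadratically with the error
print("Smooth L1 terms:", smooth_l1(errors))  # grows only linearly past beta
```

For the 5.0-radian outlier, MSE contributes 25.0 while Smooth L1 contributes 4.5, so a few badly mislabeled or hard boxes dominate the gradient far less.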
Thanks for the help! I will.
@simoneangarano you're welcome! 😊 If you have any more questions or need further assistance, feel free to reach out. Looking forward to your pull request! 🚀
Search before asking
YOLOv8 Component
Train
Bug
When training YOLOv8-OBB on a custom dataset with oriented bounding boxes, the model learns 0° rotation for every prediction, resulting in standard bounding boxes. I guess that the training loss does not penalize the model for predicting wrong angles. Can you please check and tell me how to modify the training code to fix this issue?
Thanks.
Environment
Ultralytics YOLOv8.2.19 🚀 Python-3.9.18 torch-2.3.0+cu121 CUDA:0 (NVIDIA GeForce RTX 3090, 24245MiB)
Setup complete ✅ (24 CPUs, 62.5 GB RAM, 249.4/1832.2 GB disk)
Minimal Reproducible Example
Additional
I cannot share the dataset, but in the training log images the batch visualizations show correct rotated bounding boxes while predictions are not rotated.
Are you willing to submit a PR?