Performance still changes even after layers are frozen during training #11808
Comments
Hello,

It's intriguing that you're seeing changes in pose estimation performance even after freezing the layers. Despite the logs confirming the layers were frozen, a few things could be going on here. One possibility involves subtle interactions between layers that simple freezing does not account for, especially given the complex multi-head architecture you're working with. It would be worth double-checking that no unexpected updates are being made to parameters or states outside the detection head during training. Also, ensure that the training regime (learning rates, batch sizes, data augmentation) remains consistent across the different training sessions, as inconsistencies here might indirectly affect the model's behavior even if the layers are nominally frozen.

If you haven't already done so, a thorough comparison of pre- and post-training activations for the frozen layers could reveal whether they are indeed unchanged. Regarding your workaround: feel free to share any further observations or code snippets, and I'm certain we can dive deeper into this issue together. Keep experimenting! 🚀
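The activation comparison suggested above can be sketched with a plain PyTorch toy module. This is a minimal illustration, not the actual YOLOv8 model: the `nn.Linear` here is a hypothetical stand-in for a frozen backbone layer, and with an Ultralytics model you would probe the corresponding submodules (e.g. via forward hooks) instead.

```python
# Sketch: verify that a frozen layer produces identical activations on a
# fixed probe input before and after training. Toy module, not the real model.
import torch
import torch.nn as nn

def capture_activation(module, x):
    """Run a fixed input through a module and return its output tensor."""
    module.eval()
    with torch.no_grad():
        return module(x).clone()

torch.manual_seed(0)
backbone = nn.Linear(4, 4)          # stand-in for a frozen backbone layer
probe = torch.randn(1, 4)           # fixed probe input, reused for both checks

before = capture_activation(backbone, probe)
# ... training would happen here; the layer is frozen, so nothing should update ...
after = capture_activation(backbone, probe)

print(torch.equal(before, after))   # True if the layer is truly unchanged
```

If the two activations ever differ for a nominally frozen layer, something outside the optimizer (e.g. running statistics or a checkpoint conversion) has touched it.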
@glenn-jocher Thanks for the reply! The workaround should be good enough for me, for now.
Hello,

Great to hear that the VRAM usage and training speed observations align with the layers being frozen correctly! It sounds like you're on the right track. The changes in performance might indeed be related to factors like exponential moving averages (EMA) or precision conversions that aren't immediately obvious.

Your workaround is a smart move to ensure consistency while you explore the underlying cause. If you need to delve deeper into this, checking any involved EMA updates or precision settings during training could provide more insights.

Keep up the good work, and don't hesitate to reach out if you have more questions or updates! 🌟
Search before asking
Question
I am developing a YOLOv8 model with two heads: one for pose estimation and one for detection. I created the model by adding a detection head to the existing pose model:
I also modified the dataset, trainer, validator, and predictor so both heads can work. I copied the weights of the backbone and pose head from the pretrained model `yolov8s-pose.pt`. I only train the detection head by using:

From my understanding, only the detection head should change. The log does confirm:
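For context, freezing in Ultralytics boils down to setting `requires_grad=False` on the chosen modules so the optimizer skips them. A toy PyTorch sketch of that mechanism (stand-in `Linear` layers, not the real backbone and heads):

```python
# Sketch: freeze one module via requires_grad=False; only the unfrozen head
# should receive gradient updates. Toy modules stand in for backbone/head.
import torch
import torch.nn as nn

torch.manual_seed(0)
backbone = nn.Linear(4, 4)          # stand-in for the frozen backbone + pose head
head = nn.Linear(4, 1)              # stand-in for the trainable detection head

for p in backbone.parameters():     # freeze the backbone
    p.requires_grad = False

frozen_before = backbone.weight.clone()
head_before = head.weight.clone()

# Optimizer only sees trainable parameters, mirroring how frozen layers
# are excluded from updates.
params = [p for m in (backbone, head) for p in m.parameters() if p.requires_grad]
opt = torch.optim.SGD(params, lr=0.1)

x = torch.randn(8, 4)
loss = head(backbone(x)).pow(2).mean()
loss.backward()
opt.step()

print(torch.equal(frozen_before, backbone.weight))  # True: frozen layer untouched
print(torch.equal(head_before, head.weight))        # False: head was updated
```

In this mechanism the frozen tensors are bit-identical after training, so any difference in the saved checkpoint must come from something outside the gradient updates themselves.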
I even checked the weight tensors before and after training, and indeed only the last layer changes. However, I still find that the mAP of pose estimation is different between `yolov8s-pose.pt` and `trained.pt`, which should not happen.

The only workaround I got is to:
And now the trained model has the same pose mAP as the pretrained model. But I am still confused about why freezing layers does not work as intended.

Can someone with similar experiences share their solutions? Thanks.
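One way to pinpoint exactly which tensors changed between the two checkpoints is to diff their state dicts key by key. A sketch with toy dicts; with real files you would compare the loaded model weights of each checkpoint (`yolov8s-pose.pt` vs. `trained.pt`), casting to a common dtype first since saved checkpoints may be FP16 while the live model is FP32:

```python
# Sketch: list the parameter keys whose tensors differ between two state
# dicts. Toy dicts here; in practice load both checkpoints with torch.load
# and compare the model state_dicts key by key.
import torch

def changed_keys(sd_a, sd_b):
    """Return keys present in both dicts whose tensors are not close."""
    return [k for k in sd_a
            if k in sd_b
            and not torch.allclose(sd_a[k].float(), sd_b[k].float())]

before = {"backbone.w": torch.ones(3), "head.w": torch.ones(3)}
after  = {"backbone.w": torch.ones(3), "head.w": torch.ones(3) * 2}

print(changed_keys(before, after))   # ['head.w']
```

If this diff reports only the detection-head keys yet the pose mAP still moves, the remaining suspects are things not captured in the raw parameter comparison, such as EMA weights or the precision of the saved checkpoint.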
Additional
No response