Performance still changes even after layers are frozen during training #11808

Closed
1 task done
zvant opened this issue May 9, 2024 · 4 comments
Labels
question Further information is requested

Comments

@zvant

zvant commented May 9, 2024

Search before asking

Question

I am developing a YOLOv8 model with two heads, one for pose estimation and one for detection. I created it by adding a detection head to the existing pose model:

                   from  n    params  module                                       arguments
  0                  -1  1       928  ultralytics.nn.modules.conv.Conv             [3, 32, 3, 2]
  1                  -1  1     18560  ultralytics.nn.modules.conv.Conv             [32, 64, 3, 2]
  2                  -1  1     29056  ultralytics.nn.modules.block.C2f             [64, 64, 1, True]
  3                  -1  1     73984  ultralytics.nn.modules.conv.Conv             [64, 128, 3, 2]
  4                  -1  2    197632  ultralytics.nn.modules.block.C2f             [128, 128, 2, True]
  5                  -1  1    295424  ultralytics.nn.modules.conv.Conv             [128, 256, 3, 2]
  6                  -1  2    788480  ultralytics.nn.modules.block.C2f             [256, 256, 2, True]
  7                  -1  1   1180672  ultralytics.nn.modules.conv.Conv             [256, 512, 3, 2]
  8                  -1  1   1838080  ultralytics.nn.modules.block.C2f             [512, 512, 1, True]
  9                  -1  1    656896  ultralytics.nn.modules.block.SPPF            [512, 512, 5]
 10                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 11             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 12                  -1  1    591360  ultralytics.nn.modules.block.C2f             [768, 256, 1]
 13                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 14             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 15                  -1  1    148224  ultralytics.nn.modules.block.C2f             [384, 128, 1]
 16                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]
 17            [-1, 12]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 18                  -1  1    493056  ultralytics.nn.modules.block.C2f             [384, 256, 1]
 19                  -1  1    590336  ultralytics.nn.modules.conv.Conv             [256, 256, 3, 2]
 20             [-1, 9]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 21                  -1  1   1969152  ultralytics.nn.modules.block.C2f             [768, 512, 1]
 22        [15, 18, 21]  1   2606494  ultralytics.nn.modules.head.Pose             [1, [17, 3], [128, 256, 512]]
 23        [15, 18, 21]  1   2116435  ultralytics.nn.modules.head.Detect           [1, [128, 256, 512]]

I also modified the dataset, trainer, validator, and predictor so that both heads work. I copied the backbone and pose-head weights from the pretrained model yolov8s-pose.pt, and I train only the detection head:

model = <model initialization>
model.load_state_dict(<load weights from pretrained model>)

model.train(
    other arguments,
    freeze = 23, # freeze the first 23 layers (0-22); only layer 23, the detection head, stays trainable
)
torch.save(model.state_dict(), 'trained.pt')

From my understanding, only the detection head should change, and the log confirms this:

Transferred 482/482 items from pretrained weights
Freezing layer 'model.0.conv.weight'
Freezing layer 'model.0.bn.weight'
Freezing layer 'model.0.bn.bias'
Freezing layer 'model.1.conv.weight'
......
Freezing layer 'model.22.cv4.2.1.conv.weight'
Freezing layer 'model.22.cv4.2.1.bn.weight'
Freezing layer 'model.22.cv4.2.1.bn.bias'
Freezing layer 'model.22.cv4.2.2.weight'
Freezing layer 'model.22.cv4.2.2.bias'
Freezing layer 'model.23.dfl.conv.weight'

I even compared the weight tensors before and after training, and indeed only the last layer changes. However, the pose-estimation mAP of trained.pt still differs from that of yolov8s-pose.pt, which should not happen.
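
For reference, this is roughly the comparison I ran, written out as a minimal sketch. It assumes both files are plain state dicts saved with torch.save(model.state_dict(), ...); 'before.pt' is a hypothetical dump taken right after loading the pretrained weights.

import torch

# Hypothetical file names: 'before.pt' is the state dict saved right after
# loading the pretrained weights, 'trained.pt' is the one saved after training.
sd_before = torch.load('before.pt', map_location='cpu')
sd_after = torch.load('trained.pt', map_location='cpu')

for name, t_before in sd_before.items():
    t_after = sd_after.get(name)
    if t_after is None:
        print(f'missing after training: {name}')
        continue
    if not torch.equal(t_before.float(), t_after.float()):
        # running_mean / running_var / num_batches_tracked are buffers, not
        # parameters, so requires_grad-based freezing does not pin them.
        kind = 'buffer' if ('running_' in name or 'num_batches_tracked' in name) else 'param'
        print(f'changed ({kind}): {name}')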

The only workaround I have found is:

model = <model initialization>
model.load_state_dict(<load weights from pretrained model>)
model_loaded = copy.deepcopy(model) # create a copy of the pretrained model

model.train(
    other arguments,
    freeze = 23,
)
model_loaded.model.model[23].load_state_dict(model.model.model[23].state_dict()) # copy the trained detection head into the pretrained copy
torch.save(model_loaded.state_dict(), 'trained.pt')

With this, the trained model has the same pose mAP as the pretrained model, but I am still confused about why freezing the layers does not behave as intended.

Can someone who has had a similar experience share their solution? Thanks.

Additional

No response

@zvant zvant added the question Further information is requested label May 9, 2024

github-actions bot commented May 9, 2024

👋 Hello @zvant, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of our up-to-date verified environments, with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled.

Status

If the Ultralytics CI badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@glenn-jocher
Member

Hello,

It's intriguing that you're seeing changes in the pose estimation performance even after freezing the layers. Despite the logs confirming the layers were frozen, there could be a few things going on here.

One possibility might involve subtle interactions between the layers that simple freezing does not account for, especially given the complex multi-head architecture you're working with. It could be beneficial to double-check that no unexpected updates are being made to parameters or states outside of the detection head during training.

Also, ensure that the training regime (learning rates, batches, data augmentation) remains consistent across training sessions, as inconsistencies there might indirectly affect the model's behavior even if the layers are nominally frozen.

If you haven't already done so, a thorough comparison of pre- and post-training activations for the frozen layers could reveal if they are indeed unchanged.
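
As a rough sketch of that kind of check: the helper below records every leaf module's output with forward hooks and compares the two models on one fixed input. Names like model_before and model_after are placeholders for the underlying nn.Modules of the two checkpoints (e.g. the .model attribute of the Ultralytics wrapper), and the zero tensor is just a stand-in for a real preprocessed batch.

import torch

def capture_activations(module, x):
    # Record the output of every leaf submodule via forward hooks,
    # keyed by the submodule's qualified name.
    acts, handles = {}, []

    def make_hook(name):
        def hook(_m, _inp, out):
            if torch.is_tensor(out):
                acts[name] = out.detach().float().cpu()
        return hook

    for name, m in module.named_modules():
        if name and not list(m.children()):  # leaf modules only
            handles.append(m.register_forward_hook(make_hook(name)))
    module.eval()
    with torch.no_grad():
        module(x)
    for h in handles:
        h.remove()
    return acts

x = torch.zeros(1, 3, 640, 640)
acts_a = capture_activations(model_before, x)  # placeholder models, see note above
acts_b = capture_activations(model_after, x)
for name in acts_a:
    if name in acts_b and acts_a[name].shape == acts_b[name].shape:
        diff = (acts_a[name] - acts_b[name]).abs().max().item()
        if diff > 1e-6:
            print(f'{name}: max abs diff {diff:.3e}')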

Regarding your workaround using deepcopy, it's a clever approach to ensure absolute consistency in non-trained parts of the model, although it ideally shouldn't be necessary if freezing works as intended.

Feel free to share any further observations or code snippets, and I'm certain we can dive deeper into this issue together. Keep experimenting! 🚀

@zvant
Author

zvant commented May 9, 2024

@glenn-jocher Thanks for the reply!
I can also confirm that VRAM usage when training with frozen layers is significantly lower than when training the whole network, and training is much faster, so I am fairly sure gradients are not computed for the frozen layers. There must therefore be some mechanism changing the model's performance that I am not aware of, maybe some kind of EMA or precision conversion.
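
A quick way to test the EMA idea would be to inspect the checkpoint the trainer itself writes. This is only a sketch: the runs/pose/train/weights/last.pt path and the 'model'/'ema' entries are assumptions, and the checkpoint layout differs between Ultralytics versions (some store only the EMA copy).

import torch

ckpt = torch.load('runs/pose/train/weights/last.pt', map_location='cpu', weights_only=False)
raw, ema = ckpt.get('model'), ckpt.get('ema')
if raw is not None and ema is not None:
    sd_raw = raw.float().state_dict()
    sd_ema = ema.float().state_dict()
    for name, t in sd_raw.items():
        if name in sd_ema and not torch.equal(t, sd_ema[name]):
            print('EMA copy differs from raw model:', name)
else:
    print('checkpoint entries:', list(ckpt.keys()))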

But the workaround should be good enough for me, for now.

@zvant zvant closed this as completed May 19, 2024
@glenn-jocher
Member

Hello,

Great to hear that the VRAM usage and training speed observations align with the layers being frozen correctly! It sounds like you're on the right track. The changes in performance might indeed be related to factors like exponential moving averages (EMA) or precision conversions that aren't immediately obvious.

Your workaround is a smart move to ensure consistency while you explore the underlying cause. If you need to delve deeper into this, checking any involved EMA updates or precision settings during training could provide more insights.
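
One more mechanism that could be worth ruling out alongside EMA and precision: BatchNorm running_mean/running_var are buffers rather than parameters, so requires_grad-based freezing does not stop them from updating while the frozen layers sit in train() mode. The sketch below is hypothesis-only; the callback name, the trainer.model.model layout, and the layer cutoff are assumptions that may differ between Ultralytics versions.

import torch.nn as nn

def keep_frozen_bn_in_eval(trainer):
    # Put BatchNorm modules inside the frozen layers (0-22) back into eval mode
    # so their running statistics stop updating; layer 23 (Detect) is left alone.
    for i, layer in enumerate(trainer.model.model):
        if i >= 23:
            break
        for m in layer.modules():
            if isinstance(m, nn.BatchNorm2d):
                m.eval()

# model is the same YOLO wrapper as in the original post (hypothetical usage):
# model.add_callback('on_train_batch_start', keep_frozen_bn_in_eval)
# model.train(..., freeze=23)  # then re-check the pose mAP of the frozen head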

Keep up the good work, and don't hesitate to reach out if you have more questions or updates! 🌟
