How to use the pretrained model uniformer_base_in1k.pth as my backbone? #20
Have you used the latest version? The bug has been fixed in the latest version; see UniFormer/video_classification/slowfast/models/uniformer.py, lines 398 to 404 at commit 53215bf.
As there is no more activity, I am closing the issue; don't hesitate to reopen it if necessary.
Ok, thanks. I have applied your model (ImageNet-1K pretrained with Token Labeling, 224x224: uniformer_base_tl_224.pth) as the backbone for my visual tracker. But judging from the current training logs, your model does not seem to perform as well as other backbones (such as Swin-T or ResNet-50) on this task.
@hongsheng-Z Can you try the new pre-trained model? Moreover, I am not sure whether you have used the code of the new version, since I have updated the model config.
Someone else also met similar problems because of a wrong model config, but the performance was normal with the right config. I suggest you check your model config.
By the way, for downstream tasks, you'd better freeze BN.
Thank you very much for your careful reply, but I don't know how to freeze BN. Can you provide the relevant reference code?
@hongsheng-Z Hi! Does the new pre-trained model work for your task? |
Yes, it seems to have worked. But I still don't know how to freeze BN, and I'm not sure which BatchNorm layers in UniFormer should be frozen. Thanks for your excellent work.
@hongsheng-Z Freezing BN is a trick for downstream tasks. BN should be frozen if your batch size is too small, e.g., 2 per GPU for object detection. If your batch size is large enough (>8 per GPU), freezing BN does not help. Besides, you can use SyncBN as well. To freeze BN, you can simply override `train()` so the normalization layers stay in eval mode:

```python
def train(self, mode=True):
    """Convert the model into training mode while keeping
    the normalization layers frozen."""
    super(ResNet, self).train(mode)
    self._freeze_stages()
    if mode and self.norm_eval:
        for m in self.modules():
            # trick: eval() has an effect on BatchNorm only
            if isinstance(m, _BatchNorm):
                m.eval()
```
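The override above is written for a ResNet subclass with `_freeze_stages` and `norm_eval` attributes. For an arbitrary backbone such as UniFormer, the same trick can be sketched as a standalone helper; `freeze_bn` is a hypothetical name for illustration, not part of the UniFormer repo:

```python
import torch.nn as nn

def freeze_bn(model: nn.Module) -> nn.Module:
    """Put every BatchNorm layer into eval mode and stop its updates."""
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.eval()  # stops running mean/var updates
            for p in m.parameters():
                p.requires_grad = False  # also freeze affine weight/bias
    return model

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model.train()     # whole model in training mode...
freeze_bn(model)  # ...but the BN layer stays frozen
print(model[1].training)  # False
```

Note that `freeze_bn` must be called after every `model.train()`, since `train()` flips all submodules back to training mode; wrapping it in a `train()` override, as above, handles that automatically.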
@hongsheng-Z Hi! Does UniFormer work for your task now? |
Yeah! Thank you very much for your patient replies.
Thanks for your excellent work; I have used it as the backbone for tracking tasks. To illustrate its validity, I would like to use the structure diagram from your paper, such as Figure 3 (perhaps slightly changed, similar to how Swin Transformer is shown for Swin-T). I'm not sure if this is allowed or not.
Thanks! Feel free to do it!
As there is no more activity, I am closing the issue; don't hesitate to reopen it if necessary.
There are some problems when I use the pre-trained model uniformer_base_in1k.pth as my backbone:

```
missing keys: ['patch_embed1.norm.weight', 'patch_embed1.norm.bias', 'patch_embed1.proj.weight', 'patch_embed1.proj.bias', 'patch_embed2.norm.weight', .....
unexpected keys: ['model']
```
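For reference, an `unexpected keys: ['model']` message usually means the checkpoint is a dict that nests the weights under a `'model'` key, so they need to be unwrapped before calling `load_state_dict`. A minimal sketch, where the `nn.Linear` backbone is a placeholder standing in for the actual UniFormer model:

```python
import torch
import torch.nn as nn

# Placeholder backbone; in practice this would be the UniFormer model.
backbone = nn.Linear(4, 2)

# Simulate a checkpoint saved as {'model': state_dict}, the layout
# that the "unexpected keys: ['model']" error suggests.
torch.save({"model": backbone.state_dict()}, "checkpoint.pth")

checkpoint = torch.load("checkpoint.pth", map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)  # unwrap if nested
missing, unexpected = backbone.load_state_dict(state_dict, strict=False)
print("missing:", missing, "unexpected:", unexpected)  # both empty lists
```

With `strict=False`, `load_state_dict` returns the lists of missing and unexpected keys instead of raising, which makes it easy to verify the unwrapped checkpoint actually matches the model.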