missing keys in source state_dict #22

Closed
jiabeiwangTJU opened this issue Apr 27, 2021 · 5 comments

@jiabeiwangTJU

missing keys in source state_dict:
patch_embed.proj.weight, patch_embed.proj.bias, patch_embed.norm.weight, patch_embed.norm.bias, layers.0.blocks.0.norm1.weight, layers.0.blocks.0.norm1.bias, layers.0.blocks.0.attn.relative_position_bias_table, layers.0.blocks.0.attn.relative_position_index, layers.0.blocks.0.attn.qkv.weight, layers.0.blocks.0.attn.qkv.bias, layers.0.blocks.0.attn.proj.weight, layers.0.blocks.0.attn.proj.bias, layers.0.blocks.0.norm2.weight, layers.0.blocks.0.norm2.bias, layers.0.blocks.0.mlp.fc1.weight, layers.0.blocks.0.mlp.fc1.bias, layers.0.blocks.0.mlp.fc2.weight, layers.0.blocks.0.mlp.fc2.bias, layers.0.blocks.1.norm1.weight, layers.0.blocks.1.norm1.bias, layers.0.blocks.1.attn.relative_position_bias_table, layers.0.blocks.1.attn.relative_position_index, layers.0.blocks.1.attn.qkv.weight, layers.0.blocks.1.attn.qkv.bias, layers.0.blocks.1.attn.proj.weight, layers.0.blocks.1.attn.proj.bias, layers.0.blocks.1.norm2.weight, layers.0.blocks.1.norm2.bias, layers.0.blocks.1.mlp.fc1.weight, layers.0.blocks.1.mlp.fc1.bias, layers.0.blocks.1.mlp.fc2.weight, layers.0.blocks.1.mlp.fc2.bias, layers.0.downsample.reduction.weight, layers.0.downsample.norm.weight, layers.0.downsample.norm.bias, layers.1.blocks.0.norm1.weight, layers.1.blocks.0.norm1.bias, layers.1.blocks.0.attn.relative_position_bias_table, layers.1.blocks.0.attn.relative_position_index, layers.1.blocks.0.attn.qkv.weight, layers.1.blocks.0.attn.qkv.bias, layers.1.blocks.0.attn.proj.weight, layers.1.blocks.0.attn.proj.bias, layers.1.blocks.0.norm2.weight, layers.1.blocks.0.norm2.bias, layers.1.blocks.0.mlp.fc1.weight, layers.1.blocks.0.mlp.fc1.bias, layers.1.blocks.0.mlp.fc2.weight, layers.1.blocks.0.mlp.fc2.bias, layers.1.blocks.1.norm1.weight, layers.1.blocks.1.norm1.bias, layers.1.blocks.1.attn.relative_position_bias_table, layers.1.blocks.1.attn.relative_position_index, layers.1.blocks.1.attn.qkv.weight, layers.1.blocks.1.attn.qkv.bias, layers.1.blocks.1.attn.proj.weight, layers.1.blocks.1.attn.proj.bias, layers.1.blocks.1.norm2.weight, layers.1.blocks.1.norm2.bias, layers.1.blocks.1.mlp.fc1.weight, layers.1.blocks.1.mlp.fc1.bias, layers.1.blocks.1.mlp.fc2.weight, layers.1.blocks.1.mlp.fc2.bias, layers.1.downsample.reduction.weight, layers.1.downsample.norm.weight, layers.1.downsample.norm.bias, layers.2.blocks.0.norm1.weight, layers.2.blocks.0.norm1.bias, layers.2.blocks.0.attn.relative_position_bias_table, layers.2.blocks.0.attn.relative_position_index, layers.2.blocks.0.attn.qkv.weight, layers.2.blocks.0.attn.qkv.bias, layers.2.blocks.0.attn.proj.weight, layers.2.blocks.0.attn.proj.bias, layers.2.blocks.0.norm2.weight, layers.2.blocks.0.norm2.bias, layers.2.blocks.0.mlp.fc1.weight, layers.2.blocks.0.mlp.fc1.bias, layers.2.blocks.0.mlp.fc2.weight, layers.2.blocks.0.mlp.fc2.bias, layers.2.blocks.1.norm1.weight, layers.2.blocks.1.norm1.bias, layers.2.blocks.1.attn.relative_position_bias_table, layers.2.blocks.1.attn.relative_position_index, layers.2.blocks.1.attn.qkv.weight, layers.2.blocks.1.attn.qkv.bias, layers.2.blocks.1.attn.proj.weight, layers.2.blocks.1.attn.proj.bias, layers.2.blocks.1.norm2.weight, layers.2.blocks.1.norm2.bias, layers.2.blocks.1.mlp.fc1.weight, layers.2.blocks.1.mlp.fc1.bias, layers.2.blocks.1.mlp.fc2.weight, layers.2.blocks.1.mlp.fc2.bias, layers.2.blocks.2.norm1.weight, layers.2.blocks.2.norm1.bias, layers.2.blocks.2.attn.relative_position_bias_table, layers.2.blocks.2.attn.relative_position_index, layers.2.blocks.2.attn.qkv.weight, layers.2.blocks.2.attn.qkv.bias, layers.2.blocks.2.attn.proj.weight, 
layers.2.blocks.2.attn.proj.bias, layers.2.blocks.2.norm2.weight, layers.2.blocks.2.norm2.bias, layers.2.blocks.2.mlp.fc1.weight, layers.2.blocks.2.mlp.fc1.bias, layers.2.blocks.2.mlp.fc2.weight, layers.2.blocks.2.mlp.fc2.bias, layers.2.blocks.3.norm1.weight, layers.2.blocks.3.norm1.bias, layers.2.blocks.3.attn.relative_position_bias_table, layers.2.blocks.3.attn.relative_position_index, layers.2.blocks.3.attn.qkv.weight, layers.2.blocks.3.attn.qkv.bias, layers.2.blocks.3.attn.proj.weight, layers.2.blocks.3.attn.proj.bias, layers.2.blocks.3.norm2.weight, layers.2.blocks.3.norm2.bias, layers.2.blocks.3.mlp.fc1.weight, layers.2.blocks.3.mlp.fc1.bias, layers.2.blocks.3.mlp.fc2.weight, layers.2.blocks.3.mlp.fc2.bias, layers.2.blocks.4.norm1.weight, layers.2.blocks.4.norm1.bias, layers.2.blocks.4.attn.relative_position_bias_table, layers.2.blocks.4.attn.relative_position_index, layers.2.blocks.4.attn.qkv.weight, layers.2.blocks.4.attn.qkv.bias, layers.2.blocks.4.attn.proj.weight, layers.2.blocks.4.attn.proj.bias, layers.2.blocks.4.norm2.weight, layers.2.blocks.4.norm2.bias, layers.2.blocks.4.mlp.fc1.weight, layers.2.blocks.4.mlp.fc1.bias, layers.2.blocks.4.mlp.fc2.weight, layers.2.blocks.4.mlp.fc2.bias, layers.2.blocks.5.norm1.weight, layers.2.blocks.5.norm1.bias, layers.2.blocks.5.attn.relative_position_bias_table, layers.2.blocks.5.attn.relative_position_index, layers.2.blocks.5.attn.qkv.weight, layers.2.blocks.5.attn.qkv.bias, layers.2.blocks.5.attn.proj.weight, layers.2.blocks.5.attn.proj.bias, layers.2.blocks.5.norm2.weight, layers.2.blocks.5.norm2.bias, layers.2.blocks.5.mlp.fc1.weight, layers.2.blocks.5.mlp.fc1.bias, layers.2.blocks.5.mlp.fc2.weight, layers.2.blocks.5.mlp.fc2.bias, layers.2.downsample.reduction.weight, layers.2.downsample.norm.weight, layers.2.downsample.norm.bias, layers.3.blocks.0.norm1.weight, layers.3.blocks.0.norm1.bias, layers.3.blocks.0.attn.relative_position_bias_table, layers.3.blocks.0.attn.relative_position_index, layers.3.blocks.0.attn.qkv.weight, layers.3.blocks.0.attn.qkv.bias, layers.3.blocks.0.attn.proj.weight, layers.3.blocks.0.attn.proj.bias, layers.3.blocks.0.norm2.weight, layers.3.blocks.0.norm2.bias, layers.3.blocks.0.mlp.fc1.weight, layers.3.blocks.0.mlp.fc1.bias, layers.3.blocks.0.mlp.fc2.weight, layers.3.blocks.0.mlp.fc2.bias, layers.3.blocks.1.norm1.weight, layers.3.blocks.1.norm1.bias, layers.3.blocks.1.attn.relative_position_bias_table, layers.3.blocks.1.attn.relative_position_index, layers.3.blocks.1.attn.qkv.weight, layers.3.blocks.1.attn.qkv.bias, layers.3.blocks.1.attn.proj.weight, layers.3.blocks.1.attn.proj.bias, layers.3.blocks.1.norm2.weight, layers.3.blocks.1.norm2.bias, layers.3.blocks.1.mlp.fc1.weight, layers.3.blocks.1.mlp.fc1.bias, layers.3.blocks.1.mlp.fc2.weight, layers.3.blocks.1.mlp.fc2.bias, norm0.weight, norm0.bias, norm1.weight, norm1.bias, norm2.weight, norm2.bias, norm3.weight, norm3.bias

model = dict(
    pretrained='/storage/wjb/AlignPS/pretrained/swin_tiny_patch4_window7_224.pth',
    backbone=dict(
        type='SwinTransformer',
        embed_dim=96,
        depths=[2, 2, 6, 2],
        num_heads=[3, 6, 12, 24],
        window_size=7,
        mlp_ratio=4.,
        qkv_bias=True,
        qk_scale=None,
        drop_rate=0.,
        attn_drop_rate=0.,
        drop_path_rate=0.2,
        ape=False,
        patch_norm=True,
        out_indices=(0, 1, 2, 3),
        use_checkpoint=False),
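
For reference, here is a minimal diagnostic sketch of how one could list exactly which keys differ between the pretrained checkpoint and the detection backbone. This is not part of my original config: build_backbone is the standard mmdet 2.x builder, the checkpoint path is the one used above, and the 'model' wrapper key is an assumption about how the classification checkpoint was saved.

# Diagnostic sketch: compare checkpoint keys with the backbone's state_dict keys.
import torch
from mmdet.models import build_backbone

# Build only the backbone part of the config above (other arguments keep their defaults).
backbone = build_backbone(dict(
    type='SwinTransformer',
    embed_dim=96,
    depths=[2, 2, 6, 2],
    num_heads=[3, 6, 12, 24],
    window_size=7,
    out_indices=(0, 1, 2, 3)))

ckpt = torch.load(
    '/storage/wjb/AlignPS/pretrained/swin_tiny_patch4_window7_224.pth',
    map_location='cpu')
# Assumption: the classification checkpoint wraps its weights under 'model';
# fall back to the raw dict otherwise.
ckpt_keys = set(ckpt.get('model', ckpt).keys())
model_keys = set(backbone.state_dict().keys())

print('missing keys (in model, not in checkpoint):', sorted(model_keys - ckpt_keys))
print('unexpected keys (in checkpoint, not in model):', sorted(ckpt_keys - model_keys))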

@NEUdeep

NEUdeep commented May 31, 2021

Yes, I also get an error like this:
Swin-Transformer-Object-Detection/checkpoints/swin_small_patch4_window7_224.pth
2021-05-31 23:22:46,178 - mmdet - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: norm.weight, norm.bias, head.weight, head.bias, layers.0.blocks.1.attn_mask, layers.1.blocks.1.attn_mask, layers.2.blocks.1.attn_mask, layers.2.blocks.3.attn_mask, layers.2.blocks.5.attn_mask, layers.2.blocks.7.attn_mask, layers.2.blocks.9.attn_mask, layers.2.blocks.11.attn_mask, layers.2.blocks.13.attn_mask, layers.2.blocks.15.attn_mask, layers.2.blocks.17.attn_mask

missing keys in source state_dict: norm0.weight, norm0.bias, norm1.weight, norm1.bias, norm2.weight, norm2.bias, norm3.weight, norm3.bias

loading annotations into memory...

@hukaixuan19970627

@NEUdeep @jiabeiwangTJU I have the same problem as you. Did you solve it?

@wzr0108

wzr0108 commented Apr 15, 2022

I have the same problem as you. Did you solve it?

@alpinechipmunk

Still following this problem.

@weiyx16
Member

weiyx16 commented Aug 15, 2022

This warning is expected. The detection backbone re-creates new normalization layers (norm0, norm1, norm2, norm3) for the feature maps it feeds to the FPN; these layers do not exist in the ImageNet classification checkpoint, so they are reported as missing keys and are simply trained during detection fine-tuning. Conversely, the checkpoint's classification head, final norm, and attn_mask buffers are not needed by the detector, so they show up as unexpected keys. Both parts of the warning are safe to ignore.
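
To make this concrete, here is a small illustrative sketch (a toy example assumed to mirror how the detection SwinTransformer registers one extra LayerNorm per output stage; the class name TinyBackbone and the channel counts are placeholders, not code from this repository):

import torch.nn as nn

class TinyBackbone(nn.Module):
    """Toy stand-in that shows where the norm0..norm3 parameters come from."""
    def __init__(self, num_features=(96, 192, 384, 768), out_indices=(0, 1, 2, 3)):
        super().__init__()
        for i in out_indices:
            # One LayerNorm per FPN input stage, registered as norm0..norm3.
            # These layers are created at build time and are absent from the
            # ImageNet classification checkpoint, which is exactly why they are
            # reported as "missing keys" when the checkpoint is loaded.
            self.add_module(f'norm{i}', nn.LayerNorm(num_features[i]))

print(sorted(TinyBackbone().state_dict().keys()))
# -> ['norm0.bias', 'norm0.weight', 'norm1.bias', ..., 'norm3.bias', 'norm3.weight']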

weiyx16 closed this as completed Aug 15, 2022