how to convert sparse model to dense model? #31

Closed
Mike575 opened this issue May 12, 2023 · 1 comment

Mike575 commented May 12, 2023

After finishing pretraining resnet50, we get a resnet50 weight file in sparse form. I'd like to use a (dense) resnet50 as the backbone in my other projects. How do I convert the sparse model to a dense model? Is there a convenient function like SparseEncoder.dense_model_to_sparse in encoder.py?

keyu-tian (Owner) commented May 12, 2023

Actually we don't need a sparse-to-dense conversion (there are no "sparse"-type weights). You can use the pretrained weights directly with code like the following, just as you would an ImageNet-supervised pretrained resnet50:

import torch, timm
res50 = timm.create_model('resnet50')
state = torch.load('resnet50_1kpretrained_timm_style.pth', 'cpu')  # load the checkpoint onto CPU
missing_keys, unexpected_keys = res50.load_state_dict(state, strict=False)
# the checkpoint contains no classifier head, so only fc.* should be missing
assert missing_keys == ['fc.weight', 'fc.bias']
assert len(unexpected_keys) == 0

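If you want the loaded network purely as a feature backbone (no classifier head), one option (a sketch on my side, not code from this repo, assuming the standard timm API) is to create the model with num_classes=0 so timm replaces fc with an identity and the forward pass returns pooled features:

import torch, timm
# num_classes=0 makes timm drop the classifier head (fc becomes an identity),
# so the forward pass returns the global-average-pooled 2048-d features
backbone = timm.create_model('resnet50', num_classes=0)
state = torch.load('resnet50_1kpretrained_timm_style.pth', 'cpu')
# the checkpoint carries no fc weights (see the asserts above), so nothing
# should be reported as missing or unexpected here
missing, unexpected = backbone.load_state_dict(state, strict=False)
feats = backbone(torch.randn(1, 3, 224, 224))  # feats.shape == (1, 2048)
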
The reason is that we use torch built-in operators (nn.Conv2d, nn.LayerNorm, etc.) to simulate the sparse operators (see /pretrain/encoder.py), so the weights keep the same formats and names.
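For intuition, here is a rough hand-written sketch (not the actual code from /pretrain/encoder.py) of how a dense nn.Conv2d can stand in for a sparse conv while keeping exactly the same learnable parameters:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustration only: a "sparse" conv simulated by a plain dense conv whose
# output is re-masked. Its state_dict keys ('weight', 'bias') are identical
# to nn.Conv2d, which is why the checkpoint loads into a standard dense net.
class SimulatedSparseConv2d(nn.Conv2d):
    def forward(self, x, mask):                        # mask: 1 = visible, 0 = masked-out
        out = super().forward(x)                       # ordinary dense convolution
        mask = F.interpolate(mask, size=out.shape[-2:], mode='nearest')
        return out * mask                              # zero the masked positions again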
