Haoqing-Wang/LocalMIM


Masked Image Modeling with Local Multi-Scale Reconstruction

PyTorch implementation of
Masked Image Modeling with Local Multi-Scale Reconstruction
CVPR 2023 Highlight (Top 2.6%)

Masked Image Modeling (MIM) has achieved outstanding success in self-supervised representation learning. Unfortunately, MIM models typically carry a huge computational burden and learn slowly, which is an obstacle to their industrial application. Although the lower layers play a key role in MIM, existing models conduct the reconstruction task only at the top layer of the encoder; the lower layers receive no explicit guidance, and the interaction among their patches is used only for computing new activations. Since the reconstruction task requires non-trivial inter-patch interactions to reason about the target signals, we apply it to multiple local layers, including lower and upper ones. Further, because different layers are expected to learn information at different scales, we design local multi-scale reconstruction, where the lower layers reconstruct fine-scale supervision signals and the upper layers reconstruct coarse-scale ones. This design not only accelerates representation learning by explicitly guiding multiple layers, but also promotes multi-scale semantic understanding of the input. Extensive experiments show that, with significantly less pre-training cost, our model achieves comparable or better performance on classification, detection and segmentation tasks than existing MIM models.
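To make the multi-scale supervision concrete, the following is a minimal NumPy sketch of building reconstruction targets at several scales by average-pooling a patch-level target map (e.g. per-patch HOG features or pixel statistics): a pooling factor of 1 yields the fine-scale target for a lower layer, larger factors yield coarser targets for upper layers. The function name `multi_scale_targets` and the specific factors are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def multi_scale_targets(patch_targets, factors):
    """Build supervision signals at several scales by average pooling.

    patch_targets: (H, W, C) array of per-patch targets
                   (e.g. HOG descriptors or pixel values).
    factors: one pooling factor per tapped encoder layer;
             1 = fine scale (lower layers), larger = coarser (upper layers).
    Each factor must evenly divide the patch-grid side length.
    """
    targets = []
    for f in factors:
        H, W, C = patch_targets.shape
        # Group patches into f x f cells and average within each cell.
        coarse = patch_targets.reshape(H // f, f, W // f, f, C).mean(axis=(1, 3))
        targets.append(coarse)
    return targets
```

For a 14x14 patch grid (a 224x224 image with 16x16 patches), factors `[1, 2, 7]` would give 14x14, 7x7 and 2x2 target maps, so each tapped layer reconstructs a signal at its own scale.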

Pre-Trained Models

| Backbone | #Params | Target | GPU Hours/Ep. | PT Epochs | PT Resolution | PT log/ckpt | Top-1 (%) |
|----------|---------|--------|---------------|-----------|---------------|-------------|-----------|
| ViT-B    | 86M     | HOG    | 0.7           | 1600      | 224x224       | log/ckpt    | 84.0      |
| ViT-L    | 307M    | HOG    | 1.0           | 800       | 224x224       | log/ckpt    | 85.8      |
| Swin-B   | 88M     | Pixel  | 1.0           | 400       | 224x224       | log/ckpt    | 84.0      |
| Swin-B   | 88M     | HOG    | 1.1           | 400       | 224x224       | log/ckpt    | 84.1      |
| Swin-L   | 197M    | HOG    | 1.6           | 800       | 224x224       | log/ckpt    | 85.6      |

The pre-training and fine-tuning instructions can be found in ViT, Swin and semantic_segmentation.

Citation

If you find this project useful in your research, please consider citing:

@inproceedings{wang2023masked,
  title={Masked Image Modeling with Local Multi-Scale Reconstruction},
  author={Wang, Haoqing and Tang, Yehui and Wang, Yunhe and Guo, Jianyuan and Deng, Zhi-Hong and Han, Kai},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2122--2131},
  year={2023}
}

Acknowledgement

This code is built upon the implementation from MAE, GreenMIM, MMSeg and BEiT.
