MIM (Masked Image Modeling)
[CVPR 2023] Official repository for paper "Stare at What You See: Masked Image Modeling without Reconstruction"
Official code repository for NeurIPS 2022 paper "SatMAE: Pretraining Transformers for Temporal and Multi-Spectral Satellite Imagery"
PyTorch implementation of MAE: https://arxiv.org/abs/2111.06377
A collection of literature after or concurrent with Masked Autoencoders (MAE) (Kaiming He et al.).
[ECCV 2022] What to Hide from Your Students: Attention-Guided Masked Image Modeling
Reading list for research topics in Masked Image Modeling
[NeurIPS 2022] code for the paper, SemMAE: Semantic-guided masking for learning masked autoencoders
PyTorch implementation of the paper "MILAN: Masked Image Pretraining on Language Assisted Representation" https://arxiv.org/pdf/2208.06049.pdf.
MixMIM: Mixed and Masked Image Modeling for Efficient Visual Representation Learning
[ECCV 2022] Bootstrapped Masked Autoencoders for Vision BERT Pretraining
The official implementation of CMAE https://arxiv.org/abs/2207.13532 and https://ieeexplore.ieee.org/document/10330745
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
iBOT 🤖: Image BERT Pre-Training with Online Tokenizer (ICLR 2022)
This is an official implementation for "SimMIM: A Simple Framework for Masked Image Modeling".
A deep learning library for video understanding research.
ConvMAE: Masked Convolution Meets Masked Autoencoders
This is a PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning".
[CVPR 2023] Implementation of Siamese Image Modeling for Self-Supervised Vision Representation Learning
Pytorch reimplementation of "A Unified View of Masked Image Modeling".
The official code for the paper Evolved Part Masking for Self-Supervised Learning.
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image.
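Most of the repositories above share the same core step: randomly hiding a high ratio of image patches before encoding. A minimal sketch of that random patch masking, assuming MAE's defaults (224×224 images, 16×16 patches, 75% mask ratio); the function name and NumPy-based layout are illustrative, not taken from any of the listed codebases:

```python
import numpy as np

def random_patch_mask(num_patches: int, mask_ratio: float, seed: int = 0) -> np.ndarray:
    """Return a boolean mask over patch indices: True = masked (hidden from the encoder)."""
    rng = np.random.default_rng(seed)
    num_masked = int(num_patches * mask_ratio)
    # Shuffle patch indices and mark the first `num_masked` of them as hidden.
    perm = rng.permutation(num_patches)
    mask = np.zeros(num_patches, dtype=bool)
    mask[perm[:num_masked]] = True
    return mask

# A 224x224 image split into 16x16 patches gives 14*14 = 196 patches;
# at a 75% ratio, 147 are masked and only 49 visible patches reach the encoder.
mask = random_patch_mask(196, 0.75)
print(mask.sum(), (~mask).sum())  # 147 49
```

Variants in the list above change *which* patches are hidden (attention-guided in AttMask, semantic parts in SemMAE, mixed images in MixMIM) or *what* is predicted for them (pixels, tokens, or features), but the masking interface stays essentially this.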





