# Awesome Masked Autoencoders


*Fig. 1. Masked Autoencoders from Kaiming He et al.*

Masked Autoencoder (MAE, Kaiming He et al.) has sparked renewed interest in masked image modeling thanks to its capacity to learn useful representations from rich unlabeled data. Since its release, MAE and its follow-up works have advanced the state of the art and provided valuable insights for research (particularly vision research). Here I list several follow-up works, published after or concurrently with MAE, to inspire future research.
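For readers new to MAE, the core mechanism is simple: randomly shuffle each image's patch embeddings, feed only a small visible subset (typically 25%) to the encoder, and train a decoder to reconstruct the masked patches. Below is a minimal NumPy sketch of that per-sample random masking; the function name, shapes, and defaults are illustrative, not the paper's actual implementation.

```python
import numpy as np

def random_masking(patches: np.ndarray, mask_ratio: float = 0.75, rng=None):
    """Randomly mask a fraction of patches via per-sample shuffling.

    patches: (N, L, D) array -- N samples, L patches, D-dim embeddings.
    Returns the visible patches, a binary mask (1 = removed), and the
    indices needed to restore the original patch order.
    """
    rng = rng or np.random.default_rng()
    N, L, D = patches.shape
    len_keep = int(L * (1 - mask_ratio))

    noise = rng.random((N, L))                 # uniform noise per patch
    ids_shuffle = np.argsort(noise, axis=1)    # ascending: smallest are kept
    ids_restore = np.argsort(ids_shuffle, axis=1)

    # Keep the first len_keep patches of each shuffled sequence.
    ids_keep = ids_shuffle[:, :len_keep]
    visible = np.take_along_axis(patches, ids_keep[..., None], axis=1)

    # Build the binary mask in shuffled order, then unshuffle it.
    mask = np.ones((N, L))
    mask[:, :len_keep] = 0
    mask = np.take_along_axis(mask, ids_restore, axis=1)
    return visible, mask, ids_restore

# Example: 2 images, 196 patches (14x14 grid), 768-dim embeddings.
x = np.random.randn(2, 196, 768)
visible, mask, ids_restore = random_masking(x, mask_ratio=0.75)
print(visible.shape)  # (2, 49, 768) -- the encoder sees only 25% of patches
```

Because the encoder processes only the visible ~25% of tokens, pre-training is far cheaper than with a full-sequence encoder; `mask` and `ids_restore` let the decoder reinsert mask tokens at the correct positions for reconstruction.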

*:octocat: code link, 🌐 project page

## Vision

## Audio

## Graph

## Point Cloud

## Language (Omitted)

There has been a surge of language research built on this masking-and-predicting paradigm (e.g., BERT), so I do not list those works here.

## Miscellaneous