[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
[ICLR'23 Spotlight🔥] The first successful BERT/MAE-style pretraining on any convolutional network; PyTorch impl. of "Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling"
[ECCV2024] Video Foundation Models & Data for Multimodal Understanding
SimpleClick: Interactive Image Segmentation with Simple Vision Transformers (ICCV 2023)
PyTorch implementation of BEVT (CVPR 2022) https://arxiv.org/abs/2112.01529
Reproduction of semantic segmentation using Masked Autoencoder (MAE)
[Survey] Masked Modeling for Self-supervised Representation Learning on Vision and Beyond (https://arxiv.org/abs/2401.00897)
[CVPR2023] Masked Video Distillation: Rethinking Masked Feature Modeling for Self-supervised Video Representation Learning (https://arxiv.org/abs/2212.04500)
[CVPR'23] Hard Patches Mining for Masked Image Modeling
Unofficial PyTorch implementation of Masked Autoencoders that Listen
[SIGIR'2023] "MAERec: Graph Masked Autoencoder for Sequential Recommendation"
Official Implementation of the CrossMAE paper: Rethinking Patch Dependence for Masked Autoencoders
Cross-Sensor Masked Autoencoder for Content Based Image Retrieval in Remote Sensing
Repository for model development and training
Official repo for Recursion's accepted spotlight paper at NeurIPS 2023 Generative AI & Biology workshop.
Codebase for the paper 'EncodecMAE: Leveraging neural codecs for universal audio representation learning'
[NeurIPS 2022 Spotlight] VideoMAE for Action Detection
A PyTorch implementation of "BirdSAT: Cross-View Contrastive Masked Autoencoders for Bird Species Classification and Mapping"
[NeurIPS 2023] Masked Image Residual Learning for Scaling Deeper Vision Transformers
Official implementation of Matrix Variational Masked Autoencoder (M-MAE) for ICML paper "Information Flow in Self-Supervised Learning" (https://arxiv.org/abs/2309.17281)