VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training (Python; updated May 25, 2024)
Implementation of simple linear regression with a single feature
Adversarial Patch defense using SegmentAndComplete (SAC) & Masked AutoEncoder (MAE)
[NeurIPS 2023 (Spotlight)] Uncovering the Hidden Dynamics of Video Self-supervised Learning under Distribution Shifts
Evaluate video salient object detection via Python and CUDA
Investigating Gradient Descent behavior in linear regression
keras implementation of vision transformers
Semi-supervised Object Detection with MAE
Early stages of incorporating self-supervised learning with algorithm unrolling. Code written as part of a master's thesis (60 ECTS) at Aalborg University, Denmark.
code for "AdPE: Adversarial Positional Embeddings for Pretraining Vision Transformers via MAE+"
A collection of MAE PyTorch models that can be used to train on your own dataset
[SHREC24] Skeleton-based Self-Supervised Learning For Dynamic Hand Gesture Recognition
A recommendation system for restaurants
First-place solution for the webly-supervised fine-grained recognition competition at https://www.cvmart.net/race/10412/base
Official code for CVPR2024 “VideoMAC: Video Masked Autoencoders Meet ConvNets”
[ICML 2023] Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN
Efficient Network Traffic Classification via Pre-training Unidirectional Mamba