An optimized implementation of spatiotemporal masked autoencoders
Investigate possibilities for Vision Transformers with multiscale grids
TorchGeo: datasets, transforms, and models for geospatial data
Change detection on satellite images with masked autoencoders.
An optimized implementation of masked autoencoders (MAEs)
Train MAE on Kaggle with 2 GPUs (T4 x2), logging to Wandb
The code for the paper "Contrastive Masked Autoencoders for Self-Supervised Video Hashing" (AAAI'23)
Reproducing the MET framework with PyTorch
PyTorch implementation of MADE
R-MAE: Pre-training LiDAR Perception with Masked Autoencoders for Ultra-Efficient 3D Sensing
PyTorch wrapper for Deep Density Estimation Models
Generative modeling and representation learning through reconstruction
code for "AdPE: Adversarial Positional Embeddings for Pretraining Vision Transformers via MAE+"
HSIMAE: A Unified Masked Autoencoder with large-scale pretraining for Hyperspectral Image Classification
Official code for the CVPR 2024 paper “VideoMAC: Video Masked Autoencoders Meet ConvNets”
Codebase for Imperial MSc AI Individual Project - Self-Supervised Learning for Audio Inference
Official implementation of Matrix Variational Masked Autoencoder (M-MAE) for ICML paper "Information Flow in Self-Supervised Learning" (https://arxiv.org/abs/2309.17281)
A Vector Quantized Masked AutoEncoder for speech emotion recognition
Repository for model development and training
A patch-based masked autoencoder designed with CNNs