A patch-based masked autoencoder built with a CNN
Enhancing Representation Learning in Masked Autoencoders by Focusing on Low-Variance Components
R-MAE: Pre-training LiDAR Perception with Masked Autoencoders for Ultra-Efficient 3D Sensing
An optimized implementation of spatiotemporal masked autoencoders
Pre-training a Masked Autoencoder with ideas from Diffusion Models for Hyperspectral Image Classification.
Investigate possibilities for Vision Transformers with multiscale grids
Train MAE on Kaggle with 2 GPUs (T4 x2) and log to Weights & Biases (wandb)
PyTorch wrapper for Deep Density Estimation Models
Code for "AdPE: Adversarial Positional Embeddings for Pretraining Vision Transformers via MAE+"
A comprehensive benchmark of (masked) graph autoencoders.
Generative modeling and representation learning through reconstruction
HSIMAE: A Unified Masked Autoencoder with large-scale pretraining for Hyperspectral Image Classification
TorchGeo: datasets, transforms, and models for geospatial data
An embodied navigation agent robust to various visual corruptions.
PyTorch implementation of MADE (Masked Autoencoder for Distribution Estimation)
The code for the paper "Contrastive Masked Autoencoders for Self-Supervised Video Hashing" (AAAI'23)
AG-MAE: Anatomically Guided Spatio-Temporal Masked Auto-Encoder for Online Hand Gesture Recognition
[TGRS 2024] PEMAE: Pixel-Wise Ensembled Masked Autoencoder for Multispectral Pan-Sharpening
"Attention, Mask, and Recommendation: A Multi-Level Graph Structure-Aware Method"