
HSIMAE: A Unified Masked Autoencoder with Large-Scale Pretraining for Hyperspectral Image Classification

*Figure 2 and Figure 4 from the paper.*

✨ Highlights

Large-Scale and Diverse Dataset for HSI Pretraining

A large and diverse HSI dataset named HSIHybrid was curated for large-scale HSI pretraining. It combines 15 HSI datasets acquired by different hyperspectral sensors. After splitting into image patches, a total of 4 million HSI patches with a spatial size of 9×9 were obtained.

Group-wise PCA is used to extract features from the HSI spectra and transform raw spectra of varying band counts into fixed-length features.
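The group-wise PCA step can be sketched as follows. This is an illustrative numpy-only implementation; the group count, per-group dimensionality, and function name are assumptions, not the repository's actual preprocessing code:

```python
import numpy as np

def groupwise_pca(spectra, n_groups=4, dims_per_group=16):
    """Toy sketch of group-wise PCA: split each spectrum into contiguous
    band groups, run PCA within each group, and concatenate the projections
    so spectra from any sensor map to the same fixed-length feature."""
    n_samples, n_bands = spectra.shape
    groups = np.array_split(np.arange(n_bands), n_groups)
    feats = []
    for idx in groups:
        x = spectra[:, idx]
        x = x - x.mean(axis=0)                       # center each band group
        # principal axes via SVD of the centered group
        _, _, vt = np.linalg.svd(x, full_matrices=False)
        k = min(dims_per_group, vt.shape[0])
        proj = x @ vt[:k].T                          # project onto top-k axes
        if k < dims_per_group:                       # pad if the group is tiny
            proj = np.pad(proj, ((0, 0), (0, dims_per_group - k)))
        feats.append(proj)
    return np.concatenate(feats, axis=1)             # fixed-length features

# Two "sensors" with different band counts map to the same feature length:
rng = np.random.default_rng(0)
a = groupwise_pca(rng.normal(size=(100, 224)))   # e.g. a 224-band sensor
b = groupwise_pca(rng.normal(size=(100, 103)))   # e.g. a 103-band sensor
print(a.shape, b.shape)  # → (100, 64) (100, 64)
```

The key point is that the output width depends only on `n_groups × dims_per_group`, never on the sensor's band count.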

New MAE Architecture for HSI domain

A modified MAE named HSIMAE was proposed. It uses separate spatial and spectral encoders, followed by fusion blocks, to learn the spatial and spectral correlations of HSI data.
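As a rough illustration of how one patch could feed separate encoders, a 9×9 patch can be tokenized two ways: per-pixel spectral tokens for the spatial encoder, and per-band-group tokens for the spectral encoder, with MAE-style random masking applied before encoding. The shapes, band-group size, and mask ratio below are assumptions for the sketch, not the repository's actual configuration:

```python
import numpy as np

# A single HSI patch after group-wise PCA: 9x9 pixels, D=64 feature channels.
D, H, W = 64, 9, 9
patch = np.random.default_rng(1).normal(size=(D, H, W))

# Spatial tokens: one token per pixel, carrying its full spectral feature.
spatial_tokens = patch.reshape(D, H * W).T                            # (81, 64)

# Spectral tokens: group feature channels (8 per group), one token per group,
# carrying the flattened 9x9 spatial map of that group.
group = 8
spectral_tokens = patch.reshape(D // group, group, H * W)
spectral_tokens = spectral_tokens.reshape(D // group, group * H * W)  # (8, 648)

# MAE-style random masking of the spatial tokens (ratio is illustrative):
mask_ratio = 0.75
rng = np.random.default_rng(1)
keep = rng.permutation(H * W)[: int(H * W * (1 - mask_ratio))]
visible = spatial_tokens[keep]   # only visible tokens reach the encoder

print(spatial_tokens.shape, spectral_tokens.shape, visible.shape)
```

Each stream would then pass through its own Transformer encoder before the fusion blocks combine them.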

Dual-branch fine-tuning to leverage unlabeled data of the target dataset

A dual-branch fine-tuning framework was introduced to leverage the unlabeled data of the downstream HSI dataset and suppress overfitting when labeled training samples are scarce.
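A generic sketch of such a dual-branch objective: supervised cross-entropy on the labeled branch plus a consistency term that pushes the two branches to agree on unlabeled samples. The consistency loss and weight `lam` here are illustrative assumptions; the exact unlabeled objective used by HSIMAE is described in the paper:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def dual_branch_loss(logits_lab, labels, logits_u1, logits_u2, lam=1.0):
    """Supervised CE on labeled samples + a consistency term that makes
    the two branches' predictions agree on the same unlabeled samples."""
    p = softmax(logits_lab)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    consistency = ((softmax(logits_u1) - softmax(logits_u2)) ** 2).sum(axis=1).mean()
    return ce + lam * consistency

rng = np.random.default_rng(2)
loss = dual_branch_loss(
    rng.normal(size=(5, 4)), np.array([0, 1, 2, 3, 0]),   # labeled branch
    rng.normal(size=(8, 4)), rng.normal(size=(8, 4)),     # unlabeled branch
)
print(float(loss))
```

Because the unlabeled term needs no annotations, it can be computed over the whole target scene, which is what regularizes the small labeled set.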

🔨 Installation

  1. Install Anaconda or Miniconda

  2. Install Git

  3. Open a command line, then create and activate the environment with the following commands:

     conda create -n HSIMAE python=3.8
     conda activate HSIMAE
    
  4. Clone the repository and enter it:

     git clone https://github.com/Ryan21wy/HSIMAE.git
     cd HSIMAE
    
  5. Install the dependencies with the following command:

     pip install -r requirements.txt
    

🚀 Checkpoint

The pretraining dataset and pretrained models of HSIMAE are provided on Hugging Face.

Due to its size, the HySpecNet-11k dataset must be downloaded separately from HySpecNet-11k - A Large-Scale Hyperspectral Benchmark Dataset (rsim.berlin).

🧐 Evaluation Results

Classification Datasets:

Salinas: Salinas scene

Pavia University: Pavia University

Houston 2013: 2013 IEEE GRSS Data Fusion Contest

WHU-Hi-LongKou: WHU-Hi: UAV-borne hyperspectral and high spatial resolution (H2) benchmark datasets

Results

Overall accuracy (OA) on four HSI classification datasets. The training set and validation set each contained 5/10/15/20 random samples per class, and the remaining samples were used as the test set.

| Training Samples | Salinas | Pavia University | Houston 2013 | WHU-Hi-LongKou | Average |
|------------------|---------|------------------|--------------|----------------|---------|
| 5                | 92.99   | 87.00            | 83.89        | 96.16          | 90.01   |
| 10               | 95.14   | 96.02            | 90.14        | 97.64          | 94.74   |
| 15               | 96.51   | 97.09            | 94.52        | 98.08          | 96.55   |
| 20               | 96.62   | 97.44            | 95.65        | 98.41          | 97.03   |
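Overall accuracy, as reported above, is simply the fraction of test pixels classified correctly. A minimal sketch with toy labels (not the repository's evaluation code):

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """Overall accuracy: fraction of test pixels classified correctly."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float((y_true == y_pred).mean())

# Toy example: 6 of 8 pixels correct.
y_true = [0, 0, 1, 1, 2, 2, 2, 3]
y_pred = [0, 1, 1, 1, 2, 2, 0, 3]
print(round(overall_accuracy(y_true, y_pred) * 100, 2))  # → 75.0
```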

✏️ Citation

If you find this project helpful, please feel free to leave a star ⭐️ and cite our paper:

@ARTICLE{10607879,
  author={Wang, Yue and Wen, Ming and Zhang, Hailiang and Sun, Jinyu and Yang, Qiong and Zhang, Zhimin and Lu, Hongmei},
  journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing}, 
  title={HSIMAE: A Unified Masked Autoencoder with Large-scale Pre-training for Hyperspectral Image Classification}, 
  year={2024},
  doi={10.1109/JSTARS.2024.3432743}
}

🧑‍💻 Contact

Wang Yue
E-mail: ryanwy@csu.edu.cn
