Hyperspectral Image Classification

This repository implements 6 frameworks for hyperspectral image classification based on PyTorch and sklearn.

The detailed results can be found in the paper Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network.

Feel free to contact me if you need any further information: lironui@163.com

Some of our code references other projects.

If our code is helpful to you, please cite: Li R, Zheng S, Duan C, et al. Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network[J]. Remote Sensing, 2020, 12(3): 582.

Requirements:

numpy >= 1.16.5
PyTorch >= 1.3.1
sklearn >= 0.20.4
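
As a quick sanity check (not part of the repository), the installed versions can be printed and compared with the minimums listed above:

```python
# Quick environment check (not part of the repository): print the installed
# versions and compare them with the stated minimums.
import numpy
import torch
import sklearn

print("numpy  :", numpy.__version__)    # expect >= 1.16.5
print("torch  :", torch.__version__)    # expect >= 1.3.1
print("sklearn:", sklearn.__version__)  # expect >= 0.20.4
```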

Datasets:

You can download the hyperspectral datasets in .mat format at http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes and move the files to the ./datasets folder.
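
As a minimal sketch (not taken from the repo), one of the downloaded scenes can be inspected with scipy.io.loadmat; the file names and variable keys below are the usual ones for the Indian Pines files from the EHU page, so verify them against your copy:

```python
# Minimal sketch (not from the repo) for inspecting a downloaded scene.
# The file names and variable keys are assumptions based on the standard
# Indian Pines files from the EHU page; verify them for your copy.
from scipy.io import loadmat

data = loadmat('./datasets/Indian_pines_corrected.mat')['indian_pines_corrected']
gt = loadmat('./datasets/Indian_pines_gt.mat')['indian_pines_gt']

print(data.shape)  # (rows, cols, bands) data cube
print(gt.shape)    # (rows, cols) label map, 0 = unlabeled background
```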

Usage:

  1. Set the percentages of training and validation samples via the load_dataset function in the file ./global_module/generate_pic.py (a hedged splitting sketch follows this list).
  2. Taking the DBDA framework as an example, run ./DBDA/main.py and type the name of the dataset.
  3. The classification maps are saved to the ./DBDA/classification_maps folder, and the accuracy results are generated in the ./DBDA/records folder.
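
The repository's own splitting logic lives in load_dataset; as a hedged illustration only, the sketch below shows one way a stratified per-class split at a given training percentage can be written with sklearn. The function name and ratio values are hypothetical, not the repo's API:

```python
# Hedged illustration only: this is NOT the repo's load_dataset, just one way
# a stratified per-class split at a given training percentage can be written
# with sklearn. The function name and ratio values are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split

def split_labeled_pixels(gt, train_ratio=0.03, val_ratio=0.03, seed=0):
    """Split labeled pixel indices into train/val/test sets by the given ratios."""
    labeled = np.flatnonzero(gt.ravel() > 0)            # ignore background (label 0)
    labels = gt.ravel()[labeled]
    train_idx, rest_idx, _, rest_lab = train_test_split(
        labeled, labels, train_size=train_ratio, stratify=labels, random_state=seed)
    val_share = val_ratio / (1.0 - train_ratio)         # fraction of the remainder
    val_idx, test_idx = train_test_split(
        rest_idx, train_size=val_share, stratify=rest_lab, random_state=seed)
    return train_idx, val_idx, test_idx
```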

Network:

Figure 1. The structure of the DBDA network. The upper Spectral Branch, composed of the dense spectral block and the channel attention block, is designed to capture spectral features. The lower Spatial Branch, constituted by the dense spatial block and the spatial attention block, is designed to exploit spatial features.
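
The following is a heavily simplified PyTorch sketch of the two-branch idea only: a spectral branch with channel attention and a spatial branch with spatial attention, fused before a classifier. The module names, layer choices, and the SE-style/convolutional attention used here are assumptions for illustration, not the paper's dense 3-D blocks or its exact attention modules:

```python
# Heavily simplified sketch of a two-branch network with channel and spatial
# attention. All names and layers here are illustrative assumptions, not the
# DBDA implementation from this repository.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # squeeze spatial dims -> (N, C)
        return x * w[:, :, None, None]          # reweight channels

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                       # x: (N, C, H, W)
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.max(1, keepdim=True)[0]], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class TwoBranchNet(nn.Module):
    def __init__(self, bands, n_classes, width=64):
        super().__init__()
        self.spectral = nn.Sequential(           # 1x1 convs act along the spectrum
            nn.Conv2d(bands, width, 1), nn.BatchNorm2d(width),
            nn.ReLU(inplace=True), ChannelAttention(width))
        self.spatial = nn.Sequential(             # 3x3 convs act over neighbors
            nn.Conv2d(bands, width, 3, padding=1), nn.BatchNorm2d(width),
            nn.ReLU(inplace=True), SpatialAttention())
        self.head = nn.Linear(2 * width, n_classes)

    def forward(self, x):                        # x: (N, bands, H, W) patch
        f = torch.cat([self.spectral(x).mean((2, 3)),
                       self.spatial(x).mean((2, 3))], dim=1)
        return self.head(f)

# Example: a batch of 9x9 patches with 200 bands, 16 classes (Indian Pines sizes)
logits = TwoBranchNet(bands=200, n_classes=16)(torch.randn(2, 200, 9, 9))
```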

Results:

Figure 2. Classification maps for the IP dataset using 3% training samples. (a) False-color image. (b) Ground truth (GT). (c)–(h) The classification maps with disparate algorithms.

Figure 3. Classification maps for the UP dataset using 0.5% training samples. (a) False-color image. (b) Ground truth (GT). (c)–(h) The classification maps with disparate algorithms.

Figure 4. Classification maps for the SV dataset using 0.5% training samples. (a) False-color image. (b) Ground truth (GT). (c)–(h) The classification maps with disparate algorithms.

Figure 5. Classification maps for the BS dataset using 1.2% training samples. (a) False-color image. (b) Ground truth (GT). (c)–(h) The classification maps with disparate algorithms.
