This is a code demo for the paper "Deep Cross-domain Few-shot Learning for Hyperspectral Image Classification".
Some of our code references other projects.
Requirements:
- CUDA = 10.0
- Python = 3.7
- PyTorch = 1.5
- scikit-learn = 0.23.2
- NumPy = 1.19.2
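A quick way to confirm that the installed packages match the versions above is to print them from Python (a minimal sketch, assuming the standard torch/sklearn/numpy package names):

```python
# Minimal environment check: print installed versions to compare against the list above.
import numpy
import sklearn
import torch

print("NumPy:", numpy.__version__)
print("scikit-learn:", sklearn.__version__)
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```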
Datasets:
- Target domain data sets: you can download the hyperspectral datasets in .mat format at http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes and move the files to the ./datasets folder.
- Source domain data set: the source domain hyperspectral data set (Chikusei) in .mat format is available at http://park.itc.utokyo.ac.jp/sal/hyperdata. Alternatively, you can download the preprocessed source domain data set (Chikusei_imdb_128.pickle) directly in pickle format from https://pan.baidu.com/s/1JrCWJLmPFccfOrSh3P5QEA (password: rnjv). Move the files to the ./datasets folder.
An example dataset folder has the following structure:
datasets
├── Chikusei_imdb_128.pickle
├── IP
│   ├── indian_pines_corrected.mat
│   └── indian_pines_gt.mat
├── salinas
│   ├── salinas_corrected.mat
│   └── salinas_gt.mat
├── pavia
│   ├── pavia.mat
│   └── pavia_gt.mat
└── paviaU
    ├── paviaU_gt.mat
    └── paviaU.mat
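Once the files are in place, they can be read with scipy and the standard library. The sketch below is illustrative only: the .mat variable keys ('paviaU', 'paviaU_gt') and the pickle contents are assumptions based on the usual naming of these public files, so check your own download if they differ.

```python
# Sketch: load the Pavia University target-domain cube, its ground truth,
# and the preprocessed Chikusei source pickle. Keys and paths are assumptions.
import pickle
from scipy.io import loadmat

data = loadmat('./datasets/paviaU/paviaU.mat')['paviaU']      # (rows, cols, bands)
gt = loadmat('./datasets/paviaU/paviaU_gt.mat')['paviaU_gt']  # (rows, cols) class labels
print(data.shape, gt.shape)                                   # Pavia University has 103 bands

with open('./datasets/Chikusei_imdb_128.pickle', 'rb') as f:
    source_imdb = pickle.load(f)                              # preprocessed source-domain data
```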
Usage:
Take the DCFSL method on the UP (Pavia University) dataset as an example:
- Download the required data sets and move them to the ./datasets folder.
- If you downloaded the source domain data set (Chikusei) in .mat format, you need to run the script Chikusei_imdb_128.py first to generate the preprocessed source domain data.
- Taking 5 labeled samples per class as an example, run DAFSC-UP.py --test_lsample_num_per_class 5 --tar_input_dim 103.
--test_lsample_num_per_class denotes the number of labeled samples per class for the target domain data set, and --tar_input_dim denotes the number of bands of the target domain data set.
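For readers unfamiliar with these flags, the following is a hypothetical argparse sketch of how they could be declared; the actual definitions inside DAFSC-UP.py may differ.

```python
# Hypothetical sketch of the two command-line flags (not copied from DAFSC-UP.py).
import argparse

parser = argparse.ArgumentParser(description='DCFSL few-shot classification')
parser.add_argument('--test_lsample_num_per_class', type=int, default=5,
                    help='labeled samples per class in the target domain')
parser.add_argument('--tar_input_dim', type=int, default=103,
                    help='number of spectral bands of the target domain data set')
args = parser.parse_args()
print(args.test_lsample_num_per_class, args.tar_input_dim)
```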