SAR-HUB

1. Introduction

This project accompanies the paper "SAR-HUB: Pre-training, Fine-tuning, and Explaining":

https://www.mdpi.com/2072-4292/15/23/5534

1.1 Features

  1. Pre-training: Deep neural networks are trained with large-scale, open-source SAR scene image datasets.

  2. Fine-tuning: The pre-trained DNNs are transferred to diverse SAR downstream tasks.

  3. Explaining: Benefits of SAR pre-trained models in comparison to optical pre-trained models are explained.

We release this repository with an emphasis on reproducibility (open-source code and datasets), generalization (extensive experiments on different tasks), and explainability (qualitative and quantitative explanations).

1.2 Contributions

  • An optimization method for large-scale SAR image classification is proposed to improve model performance.

  • A novel explanation method is proposed to explain the benefits of SAR pre-trained models qualitatively and quantitatively.

  • The Model-Hub offers a variety of SAR pre-trained models validated on various SAR benchmark datasets.

2. Previously on SAR-HUB

In our previous work, we discussed what, where, and how to transfer effectively in SAR image classification, and proposed a SAR pre-trained model (ResNet-18) based on large-scale SAR scene classification that achieved good performance on the SAR target recognition downstream task. We tentatively analyzed the generality and specificity of features in different layers to demonstrate the advantage of SAR pre-trained models.

@article{huang2019,
  title={What, where, and how to transfer in SAR target recognition based on deep CNNs},
  author={Huang, Zhongling and Pan, Zongxu and Lei, Bin},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  volume={58},
  number={4},
  pages={2324--2336},
  year={2019},
  publisher={IEEE}
}
@article{huang2020,
  title={Classification of large-scale high-resolution SAR images with deep transfer learning},
  author={Huang, Zhongling and Dumitru, Corneliu Octavian and Pan, Zongxu and Lei, Bin and Datcu, Mihai},
  journal={IEEE Geoscience and Remote Sensing Letters},
  volume={18},
  number={1},
  pages={107--111},
  year={2020},
  publisher={IEEE}
}

Based on the preliminary findings of our previous work, we released this SAR-HUB project as a continuing study with the following extensions:

  • To further improve the large-scale SAR scene classification performance and the feature generalization ability, we propose an optimization method with Dynamic Range Adapted Enhancement (DRAE) and a mini-batch class-balanced loss function (mini-CBL) for class-imbalanced data.

  • In pre-training, seven popular CNN- and Transformer-based architectures and three different large-scale SAR scene image datasets are explored; the resulting models are collected in the Model-Hub. In fine-tuning, seven different SAR downstream tasks are evaluated.

  • We propose the SAR knowledge point (SAR-KP) method, together with CAM-based methods, to explain why SAR pre-trained models outperform ImageNet and optical remote sensing image pre-trained models in transfer learning.

3. Getting Started

3.1 Requirements

Please refer to requirements for installation.

If you want to run the SAR scene classification, target recognition, or SAR knowledge point experiments, please install the required dependencies listed here.

If you want to run the SAR object detection or semantic segmentation experiments, please refer to object_detection.txt and sementic_segmentation.txt respectively.

3.2 Pre-training

The code is provided here. We will complete the pivotal code after the paper is accepted.

The file directory tree is as follows:

├── dataset
│   ├── BigEarthNet
│   │   ├── BEN-S1-npy
│   ├── OpenSARUrban
│   │   ├── OSU-npy
├── data
│   ├── BEN
│   │   ├── test.txt
│   │   ├── train.txt
│   │   ├── val.txt
│   ├── OSU
│   │   ├── test.txt
│   │   ├── train.txt
│   │   ├── val.txt
├── models
│   ├── build.py
│   ├── swin_load_pretrain.py
│   ├── ...
├── src
│   ├── main.py
│   ├── test.py
│   ├── ...

3.2.1 Data Preparation

BigEarthNet-S1.0: https://bigearth.net/

OpenSARUrban: https://pan.baidu.com/s/1D2TzmUWePYHWtNhuHL7KdQ

Normalize the datasets to the range 0-1 and store them in the dataset/xxx/xxx-npy folders in .npy format.
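
A minimal sketch of this preprocessing step (the per-image min-max normalization and the helper below are our assumptions based on the sentence above, not code from this repo):

import os
import numpy as np

def normalize_to_npy(image: np.ndarray, out_path: str) -> None:
    """Min-max normalize a SAR image to [0, 1] and save it in .npy format.
    (Illustrative sketch; the repo may normalize differently.)"""
    img = image.astype(np.float32)
    lo, hi = float(img.min()), float(img.max())
    img = (img - lo) / (hi - lo + 1e-8)  # guard against constant images
    os.makedirs(os.path.dirname(out_path) or ".", exist_ok=True)
    np.save(out_path, img)

# e.g. normalize_to_npy(img, "dataset/OpenSARUrban/OSU-npy/sample_0001.npy")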

3.2.2 Initialization

We follow the transitive transfer learning strategy to train the SAR models, starting from optical remote sensing pre-trained models.

The ResNet-18 optical remote sensing pre-trained model comes from our previous work [2], and the ResNet-50 and Swin-T optical remote sensing pre-trained models are provided by reference [1]. The other six optical remote sensing pre-trained models, trained by ourselves, are uploaded to baidu (extraction code: hypr).
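
When loading one of these optical checkpoints into a model whose classifier head has a different number of classes, the mismatched weights must be skipped. A minimal PyTorch sketch; the unwrapping step and the class counts are assumptions, not code from src:

import torch
from torchvision.models import resnet18

# Illustrative sketch: initialize a SAR model from an optical remote sensing
# pre-trained checkpoint, skipping parameters whose shapes do not match.
model = resnet18(num_classes=10)  # e.g. 10 OpenSARUrban scene classes (assumption)
state = torch.load("resnet18_I_nwpu_cate45.pth", map_location="cpu")
state = state.get("state_dict", state)  # unwrap if saved with a wrapper (assumption)
model_state = model.state_dict()
filtered = {k: v for k, v in state.items()
            if k in model_state and v.shape == model_state[k].shape}
model.load_state_dict(filtered, strict=False)  # backbone in, mismatched head out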

3.2.3 DRAE and mini-CBL

The usage of DRAE with Reinhard-Devlin:

--DRAE Reinhard
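
For intuition only: Reinhard-family operators compress the large dynamic range of SAR intensities smoothly into [0, 1). A simplified stand-in (the actual Reinhard-Devlin implementation in this repo may differ):

import numpy as np

def reinhard_compress(intensity: np.ndarray) -> np.ndarray:
    # Simplified Reinhard-style tone mapping x / (1 + x); an illustrative
    # stand-in for the Reinhard-Devlin operator behind --DRAE Reinhard.
    x = intensity.astype(np.float32)
    return x / (1.0 + x)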

The usage of Mini-CBL with Focal Loss:

--loss Mini_CB_FL
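
For intuition, here is how we read Mini_CB_FL: a focal loss reweighted by class-balanced "effective number" weights (in the style of Cui et al.'s class-balanced loss) estimated from the label counts of the current mini-batch. This is an illustrative sketch, not the exact implementation in src:

import torch
import torch.nn.functional as F

def mini_cb_focal_loss(logits, targets, beta=0.999, gamma=2.0):
    # Illustrative sketch of a mini-batch class-balanced focal loss.
    num_classes = logits.size(1)
    counts = torch.bincount(targets, minlength=num_classes).float().clamp(min=1)
    weights = (1.0 - beta) / (1.0 - beta ** counts)   # effective-number weights
    weights = weights / weights.sum() * num_classes   # normalize to mean 1
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                               # probability of the true class
    return ((1.0 - pt) ** gamma * ce * weights[targets]).mean()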

An example of training on OpenSARUrban with DRAE and mini-CBL using multiple GPUs:

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 main.py \
 --if_sto 1 --model ResNet18 \
 --pretrained_path resnet18_I_nwpu_cate45.pth \
 --save_model_path model/ResNet18/ \
 --dataset OpenSARUrban --loss Mini_CB_FL --DRAE PTLS

If you want to use a single GPU, set CUDA_VISIBLE_DEVICES to the index of that GPU and change --nproc_per_node to 1:

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 main.py \
 --if_sto 1 --model ResNet18 \
 --pretrained_path resnet18_I_nwpu_cate45.pth \
 --save_model_path model/ResNet18/ \
 --dataset OpenSARUrban --loss Mini_CB_FL --DRAE PTLS

At the inference stage, use the command below:

python test.py --pretrained_path ResNet18_TSX.pth --model ResNet18 --dataset OpenSARUrban --DRAE PTLS

3.3 Fine-tuning

The fine-tuning code is under Fine-tuning.

3.3.1 Model Hub

SAR pre-trained models are available below. We provide three models for each architecture, trained on the TerraSAR-X (TSX), BigEarthNet (BEN), and OpenSARUrban (OSU) datasets respectively.

Backbone     Input size  Pretrained model
ResNet18     128×128     baidu (extraction code: hy18) / Google
ResNet50     128×128     baidu (extraction code: hy50) / Google
ResNet101    128×128     baidu (extraction code: hy01) / Google
SENet50      128×128     baidu (extraction code: hyse) / Google
MobileNetV3  128×128     baidu (extraction code: hymb) / Google
DenseNet121  128×128     baidu (extraction code: hyde) / Google
Swin-T       128×128     baidu (extraction code: hyst) / Google
Swin-B       128×128     baidu (extraction code: hysb) / Google

3.3.2 SAR Target Recognition

The file directory tree is as follows:

├── data
│   ├── FSS
│   │   ├── test.txt
│   │   ├── train.txt
│   ├── MSTAR
│   │   ├── test.txt
│   │   ├── train_10.txt
│   │   ├── train_30.txt
│   │   ├── ...
│   ├── OSS
│   │   ├── ...
├── models
│   ├── build.py
│   ├── swin_load_pretrain.py
│   ├── ...
├── src
│   ├── main.py
│   ├── test.py
│   ├── ...

Data Preparation

FuSARShip: https://radars.ac.cn/web/data/getData?dataType=FUSAR

MSTAR: https://www.sdms.afrl.af.mil/index.php?collection=mstar

OpenSARShip: https://opensar.sjtu.edu.cn/

The train/test split settings used in our experiments can be found in data/FSS, data/MSTAR, and data/OSS.

Usage of SAR Pre-trained Models

An example of training on MSTAR with 10% of the training data, using the ResNet50 model pre-trained on the TerraSAR-X dataset:

CUDA_VISIBLE_DEVICES=0 python main.py --model ResNet50 --dataset MSTAR --dataset_sub MSTAR_10 --pre_class 32 --pretrained_path ResNet50_TSX.pth
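
Conceptually, this fine-tuning step loads the 32-class pre-training checkpoint (hence --pre_class 32) and replaces the classification head for the downstream task. A hypothetical PyTorch sketch (the class counts and checkpoint layout are assumptions, not the repo's main.py):

import torch
import torch.nn as nn
from torchvision.models import resnet50

# Hypothetical sketch: load SAR pre-trained weights, then re-head the
# network for the downstream MSTAR target classes.
model = resnet50(num_classes=32)  # 32 pre-training scene classes (--pre_class 32)
state = torch.load("ResNet50_TSX.pth", map_location="cpu")
state = state.get("state_dict", state)  # unwrap if necessary (assumption)
model.load_state_dict(state, strict=False)
model.fc = nn.Linear(model.fc.in_features, 10)  # 10 MSTAR classes (assumption)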

3.3.3 SAR Object Detection

The object detection experiments are based on the MMDetection framework, combining Feature Pyramid Network (FPN) and Fully Convolutional One-Stage (FCOS) detection, and we have not changed any of the framework code. Therefore, we only provide the SAR config and __base__ files and explain how to use them.
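
Because the framework is unmodified, pointing a config at a SAR pre-trained backbone mainly means overriding the backbone initialization. A hedged sketch of such a config fragment (the base paths follow the directory tree shown below; the checkpoint name is illustrative, not verbatim from this repo):

# Illustrative MMDetection config fragment: reuse the SAR base configs and
# initialize the ResNet backbone from a SAR pre-trained checkpoint.
_base_ = [
    '../__base__/datasets/HRSID.py',
    '../__base__/schedules/schedule_1x.py',
]
model = dict(
    backbone=dict(
        init_cfg=dict(type='Pretrained', checkpoint='ResNet50_TSX.pth')))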

Data Preparation

SSDD: https://drive.google.com/file/d/1grDw3zbGjQKYPjOxv9-h4WSUctoUvu1O/view

HRSID: https://aistudio.baidu.com/aistudio/datasetdetail/54512

LS-SSDDv1.0: https://radars.ac.cn/web/data/getData?newsColumnId=6b535674-3ce6-43cc-a725-9723d9c7492c

The train/test splits follow the official settings.

Usage of SAR Pre-trained Models

The file directory tree in MMDetection is as follows:

├── mmdetection
│   ├── configs
│   │   ├── SAR
│   │   │   ├── SAR config
│   │   │   │   ├── fcos_r18_caffe_fpn_gn-head_4x4_HRSID.py
│   │   │   │   ├── fcos_r50_caffe_fpn_gn-head_4x4_HRSID.py
│   │   │   │   ├── ...
│   │   │   ├── __base__
│   │   │   │   ├── datasets
│   │   │   │   │   ├── SSDD.py
│   │   │   │   │   ├── HRSID.py
│   │   │   │   │   ├── ...
│   │   │   │   ├── SAR data
│   │   │   │   │   ├── HRSID
│   │   │   │   │   │   ├── train.json
│   │   │   │   │   │   ├── test.json
│   │   │   │   │   ├── ...
│   │   │   │   ├── schedules
│   │   │   │   │   ├── schedule_1x.py
  • Train

The training procedure is the same as in official MMDetection. You can use the command below to start training:

CUDA_VISIBLE_DEVICES=3 python tools/train.py "mmdetection-master/configs/SAR/SAR config/fcos_r18_caffe_fpn_gn-head_4x4_HRSID.py"

3.3.4 SAR Semantic Segmentation

We adopt DeepLabv3 under the MMSegmentation framework in our experiments. As with the object detection task, we only provide the SAR config and __base__ files and explain how to use them.

Data Preparation

SpaceNet6: https://spacenet.ai/sn6-challenge/

Usage of SAR Pre-trained Models

The file directory tree in MMSegmentation is as follows:

├── MMSegmentation
│   ├── configs
│   │   ├── SAR
│   │   │   ├── SAR config
│   │   │   │   ├── deeplabv3_r18_20k_SN6.py
│   │   │   │   ├── deeplabv3_r50_20k_SN6.py
│   │   │   │   ├── ...
│   │   │   ├── __base__
│   │   │   │   ├── datasets
│   │   │   │   │   ├── mvoc.py
│   │   │   │   ├── SAR data
│   │   │   │   │   ├── train.txt
│   │   │   │   │   ├── test.txt
│   │   │   │   ├── schedules
│   │   │   │   │   ├── schedule_20k.py
  • Train

The training procedure is the same as in official MMSegmentation. You can use the command below to start training:

CUDA_VISIBLE_DEVICES=3 python tools/train.py "configs/SAR/SAR config/deeplabv3_d121_20k_SN6.py"

3.4 Explaining

The code is in Explaining. We will complete the pivotal code after the paper is accepted.

(1) U-Net explainer optimization:

python train.py --model_path ResNet50_OSU.pth --tensorboard knowledge_point_SAR --save_path models/ResNet50_Unet/ --sample_time 15 

(2) Get disturbance:

python test_disturbance.py --unet_path UNet_TSX_MSTAR.pth --resnet_path ResNet50_OSU.pth --img HB03335.000_Mag.npy

The visualization of the disturbance and its corresponding .npy file will be generated.

(3) Get knowledge points:

python KP_visual.py --b 0.3 --img HB03335.000_Mag.npy --disturbance_npy HB03335.000_Mag_dis.npy --save_path img/KP_0.3/

The visualization results of the knowledge points will be generated.
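
Our reading of the --b parameter, as a hypothetical sketch rather than the code in KP_visual.py: pixels whose disturbance magnitude exceeds the fraction b of the maximum disturbance are kept as knowledge points.

import numpy as np

# Hypothetical sketch of the knowledge-point thresholding behind --b 0.3.
img = np.load("HB03335.000_Mag.npy")
dis = np.abs(np.load("HB03335.000_Mag_dis.npy"))
b = 0.3
kp_mask = dis >= b * dis.max()   # keep the most disturbed pixels
print("knowledge-point pixels:", int(kp_mask.sum()))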

4. Contributors

In this repository, we implemented the ResNet series, DenseNet121, MobileNetV3, SENet50, and the Swin series. The datasets we used include TerraSAR-X, BigEarthNet-S1.0, OpenSARUrban, MSTAR, FuSARShip, OpenSARShip, SSDD, LS-SSDDv1.0, HRSID, and SpaceNet6. In addition, we reimplemented FCOS in PyTorch based on MMDetection and DeepLabv3 based on MMSegmentation. Thanks to all of the above works for their contributions.

5. Citation

If you find this repository useful for your publications, please consider citing our paper.

@Article{rs15235534,
AUTHOR = {Yang, Haodong and Kang, Xinyue and Liu, Long and Liu, Yujiang and Huang, Zhongling},
TITLE = {SAR-HUB: Pre-Training, Fine-Tuning, and Explaining},
JOURNAL = {Remote Sensing},
VOLUME = {15},
YEAR = {2023},
NUMBER = {23},
URL = {https://www.mdpi.com/2072-4292/15/23/5534},
ISSN = {2072-4292},
DOI = {10.3390/rs15235534}
}
