
Fully Transformer Network for Change Detection of Remote Sensing Images


Paper Links: Fully Transformer Network for Change Detection of Remote Sensing Images

by Tianyu Yan, Zifu Wan, Pingping Zhang*.

Introduction


Recently, change detection (CD) of remote sensing images has achieved great progress with the advances of deep learning. However, current methods generally deliver incomplete CD regions and irregular CD boundaries due to the limited representation ability of the extracted visual features. To relieve these issues, in this work we propose a novel learning framework named Fully Transformer Network (FTN) for remote sensing image CD, which improves the feature extraction from a global view and combines multi-level visual features in a pyramid manner. More specifically, the proposed framework first utilizes the advantages of Transformers in long-range dependency modeling, which helps to learn more discriminative global-level features and obtain complete CD regions. Then, we introduce a pyramid structure to aggregate multi-level visual features from Transformers for feature enhancement. The pyramid structure, grafted with a Progressive Attention Module (PAM), further improves the feature representation ability with additional interdependencies through channel attentions. Finally, to better train the framework, we utilize deeply-supervised learning with multiple boundary-aware loss functions. Extensive experiments demonstrate that our proposed method achieves a new state-of-the-art performance on four public CD benchmarks.
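As a rough illustration of the channel-attention idea behind the PAM, the following PyTorch sketch re-weights the channels of a single feature map. The module structure and channel sizes are illustrative assumptions, not the paper's exact PAM design:

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Minimal squeeze-and-excitation style channel attention.

    Illustrative sketch only; the actual PAM aggregates multi-level
    pyramid features and may differ in structure.
    """
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from one pyramid level
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # (B, C) channel weights
        return x * w.view(b, c, 1, 1)     # re-weight each channel

if __name__ == "__main__":
    feat = torch.randn(2, 96, 56, 56)     # e.g. an early Swin-stage feature map
    print(ChannelAttention(96)(feat).shape)   # torch.Size([2, 96, 56, 56])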

Update


  • 03/17/2023: The code has been updated.

Requirements


  • python 3.5+
  • PyTorch 1.1+
  • torchvision
  • NumPy
  • tqdm
  • OpenCV

Preparations


To use the code, please download the public change detection datasets (more details are provided in the paper):

  • LEVIR-CD
  • WHU-CD
  • SYSU-CD
  • Google-CD

The processed datasets can be downloaded at this link.

Then, run the following scripts on your GPUs to reproduce the results reported in the paper.

Usage


1. Download pre-trained Swin Transformer models
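A minimal sketch of inspecting and loading such a checkpoint is given below. The file name is a placeholder; official Swin checkpoints typically keep the weights under a "model" key, and strict=False lets decoder-specific keys be skipped. Follow the repository's own loading code for the exact behavior:

import torch

# Hypothetical checkpoint file name; use whichever pre-trained Swin weights you downloaded.
ckpt = torch.load("swin_base_patch4_window7_224.pth", map_location="cpu")
state_dict = ckpt.get("model", ckpt)   # official Swin checkpoints usually store weights under "model"
print(f"{len(state_dict)} parameter tensors in the checkpoint")

# Later, load into your backbone with strict=False so keys absent from the decoder are ignored:
# missing, unexpected = backbone.load_state_dict(state_dict, strict=False)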

2. Prepare data

  • Please use utils/split.py to split the images into 224×224 patches first (a data-preparation sketch is given after this list).
  • Use utils/check.py to check whether the labels are in binary form; a message is printed if they are not.
  • Use utils/bimap.py to binarize the labels if they are not already binary.
  • You may need to move the resulting files to their corresponding locations.
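For illustration, here is a minimal data-preparation sketch using OpenCV. The paths, tile naming, and threshold are assumptions; the repository's utils/split.py and utils/bimap.py remain the reference scripts:

import os
import cv2
import numpy as np

def split_into_tiles(image_path, out_dir, tile=224):
    """Cut a large image into non-overlapping tile x tile patches."""
    img = cv2.imread(image_path, cv2.IMREAD_UNCHANGED)
    os.makedirs(out_dir, exist_ok=True)
    h, w = img.shape[:2]
    stem = os.path.splitext(os.path.basename(image_path))[0]
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = img[y:y + tile, x:x + tile]
            cv2.imwrite(os.path.join(out_dir, f"{stem}_{y}_{x}.png"), patch)

def binarize_label(label_path, out_path, threshold=127):
    """Map a grayscale change label to strict {0, 255} values."""
    lbl = cv2.imread(label_path, cv2.IMREAD_GRAYSCALE)
    binary = np.where(lbl > threshold, 255, 0).astype(np.uint8)
    cv2.imwrite(out_path, binary)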

3. Train/Test

  • For training, run:
python train_(name of the dataset).py
  • For prediction, run:
python test_swin.py
  • For evaluation (a metrics sketch follows this list), run:
python deal_evaluation.py
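Change-detection results are typically reported with pixel-level precision, recall, F1, and IoU over binary change maps. The sketch below computes these standard metrics; deal_evaluation.py is the authoritative script and may differ in details:

import numpy as np

def cd_metrics(pred: np.ndarray, gt: np.ndarray):
    """Pixel-level precision, recall, F1 and IoU for binary change maps.

    pred, gt: arrays with values in {0, 1}; 1 marks changed pixels.
    Illustrative only; use the repository's deal_evaluation.py for the reported numbers.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + 1e-10)
    recall = tp / (tp + fn + 1e-10)
    f1 = 2 * precision * recall / (precision + recall + 1e-10)
    iou = tp / (tp + fp + fn + 1e-10)
    return precision, recall, f1, iou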

Reference


Contact


If you have any problems, please contact us:

QQ: 1580329199

Email: tianyuyan2001@gmail.com or wanzifu2000@gmail.com

Citation


If you find our work helpful to your research, please cite:

@InProceedings{Yan_2022_ACCV,
    author    = {Yan, Tianyu and Wan, Zifu and Zhang, Pingping},
    title     = {Fully Transformer Network for Change Detection of Remote Sensing Images},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {1691-1708}
}
