TransCD

A Transformer-based model for scene change detection.

Requirements

Python 3.7.0  
PyTorch 1.6.0  
Visdom 0.1.8.9  
Torchvision 0.7.0
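
A minimal version check (not part of the repository) to confirm the environment matches the list above:

import sys
import torch
import torchvision
import visdom

# Print installed versions so they can be compared with the requirements above.
print("Python:", sys.version.split()[0])        # expected 3.7.x
print("PyTorch:", torch.__version__)            # expected 1.6.0
print("Torchvision:", torchvision.__version__)  # expected 0.7.0
print("Visdom:", visdom.__version__)            # expected 0.1.8.9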

Datasets

Pretrained Model

Pretrained models for CDNet-2014 and VL-CMU-CD are available; you can download them from the links below.

  • CDNet-2014: [Baiduyun] (password: 78cp), or [GoogleDrive].
    • Six models trained on the CDNet-2014 dataset are provided: SViT_E1_D1_16, SViT_E1_D1_32, SViT_E4_D4_16, SViT_E4_D4_32, Res_SViT_E1_D1_16, and Res_SViT_E4_D4_16.
  • VL-CMU-CD: [Baiduyun] (password: ydzl), or [GoogleDrive].
    • Four models trained on the VL-CMU-CD dataset are provided: SViT_E1_D1_16, SViT_E1_D1_32, Res_SViT_E1_D1_16, and Res_SViT_E1_D1_32.
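
test.py loads these weights automatically once they are copied into place (see Test below). Purely as an illustration, a downloaded checkpoint can also be inspected by hand with standard PyTorch calls; this is a minimal sketch, assuming the file stores a state_dict (the file name is only an example):

import torch

# Load a downloaded checkpoint on CPU; the file name is an example, not a fixed path.
checkpoint = torch.load("SViT_E1_D1_32.pth", map_location="cpu")

# Some checkpoints wrap the weights together with training metadata.
state_dict = checkpoint.get("state_dict", checkpoint)

# List a few parameter names and shapes to confirm they match the chosen --net_cfg.
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))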

Test

Before testing, please download the datasets and pretrained models. Copy the pretrained models to the folder './dataset_name/outputs/best_weights' and run the following commands:

cd TransCD_ROOT
python test.py --net_cfg <net name> --train_cfg <training configuration>

Use --save_changemap True to save the predicted change maps. For example:

python test.py --net_cfg SViT_E1_D1_32 --train_cfg CDNet_2014 --save_changemap True
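
If you want to score a saved change map against its ground-truth mask yourself, outside of test.py, a minimal sketch with the usual pixel-wise metrics could look like the following; the file names and the 255-for-changed mask convention are assumptions:

import numpy as np
from PIL import Image

# Binarize one predicted change map and its ground-truth mask (assumed 0/255 images).
pred = np.array(Image.open("changemap.png").convert("L")) > 127
gt = np.array(Image.open("groundtruth.png").convert("L")) > 127

tp = np.logical_and(pred, gt).sum()
fp = np.logical_and(pred, ~gt).sum()
fn = np.logical_and(~pred, gt).sum()

precision = tp / (tp + fp + 1e-10)
recall = tp / (tp + fn + 1e-10)
f1 = 2 * precision * recall / (precision + recall + 1e-10)
print("precision=%.4f recall=%.4f F1=%.4f" % (precision, recall, f1))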

Training

Before training, please download the datasets and set the dataset paths in configs.py to your own paths. Then run:

cd TransCD_ROOT
python -m visdom.server
python train.py --net_cfg <net name> --train_cfg <training configuration>

For example:

python -m visdom.server
python train.py --net_cfg Res_SViT_E1_D1_16 --train_cfg VL_CMU_CD

To monitor training progress, open 'http://localhost:8097' in your browser.
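
The curves on that page are produced by sending scalars to the local Visdom server during training. A stand-alone sketch of that pattern (not the repository's actual logging code) is:

import numpy as np
import visdom

# Connect to the server started by `python -m visdom.server`.
vis = visdom.Visdom(server="http://localhost", port=8097)

for step in range(1, 101):
    loss = 1.0 / step  # placeholder value; a real run would log the training loss here
    vis.line(X=np.array([step]), Y=np.array([loss]),
             win="train_loss",
             update="append" if step > 1 else None,
             opts=dict(title="training loss", xlabel="step", ylabel="loss"))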

Citing TransCD

If you use this repository or would like to refer to the paper, please use the following BibTeX entry.

@article{Wang:21,
author = {Zhixue Wang and Yu Zhang and Lin Luo and Nan Wang},
journal = {Opt. Express},
keywords = {Feature extraction; Neural networks; Object detection; Segmentation; Spatial resolution; Vision modeling},
number = {25},
pages = {41409--41427},
publisher = {OSA},
title = {TransCD: scene change detection via transformer-based architecture},
volume = {29},
month = {Dec},
year = {2021},
url = {http://www.osapublishing.org/oe/abstract.cfm?URI=oe-29-25-41409},
doi = {10.1364/OE.440720},
}

Reference

  • Akcay, Samet, Amir Atapour-Abarghouei, and Toby P. Breckon. "GANomaly: Semi-supervised anomaly detection via adversarial training." Asian Conference on Computer Vision. Springer, Cham, 2018.
  • Chen, Jieneng, et al. "TransUNet: Transformers make strong encoders for medical image segmentation." arXiv preprint arXiv:2102.04306 (2021).

More

My personal Google site
