CDAC: Cross-domain Attention Consistency in Transformer for Domain Adaptive Semantic Segmentation

Official release of the source code for "CDAC: Cross-domain Attention Consistency in Transformer for Domain Adaptive Semantic Segmentation" at ICCV 2023.

Overview

We propose Cross-Domain Attention Consistency (CDAC) to perform adaptation on attention maps using cross-domain attention layers that share features between the source and target domains. Specifically, we impose consistency between the predictions from the cross-domain attention and self-attention modules to encourage similar distributions across domains in both the attention maps and the output of the model, i.e., attention-level and output-level alignment. We also enforce consistency between the attention maps of different augmented views to further strengthen the attention-based alignment. Combining these two components, CDAC mitigates the discrepancy in attention maps across domains and further boosts the performance of the Transformer under unsupervised domain adaptation settings. Our method is evaluated on several widely used benchmarks and outperforms state-of-the-art baselines, improving GTAV-to-Cityscapes by 1.3 and 1.5 percentage points (pp) and Synthia-to-Cityscapes by 0.6 and 2.9 pp when combined with two competitive Transformer-based backbones, respectively.
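
To make the idea above concrete, below is a minimal, hypothetical PyTorch sketch of the two consistency terms, not the authors' implementation (the actual code lives in the configs and models of this repo): a single-head attention layer whose keys and values may come from the other domain, an output-level consistency loss between the self-attention and cross-domain-attention predictions, and an attention-level consistency loss between two augmented target views. The layer structure, the symmetric-KL losses, the toy segmentation head, and all dimensions are assumptions made purely for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative only: single-head attention; passing x_q == x_kv gives ordinary
# self-attention, while mixing domains gives cross-domain attention.
class CrossDomainAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x_q, x_kv):
        # x_q: (B, N_q, dim), x_kv: (B, N_kv, dim)
        attn = ((self.q(x_q) @ self.k(x_kv).transpose(-2, -1)) * self.scale).softmax(dim=-1)
        return attn @ self.v(x_kv), attn  # output (B, N_q, dim), attention (B, N_q, N_kv)

def symmetric_kl(p, q, eps=1e-8):
    # Symmetric KL divergence between two distributions along the last dimension.
    kl_pq = F.kl_div(q.clamp_min(eps).log(), p, reduction="batchmean")
    kl_qp = F.kl_div(p.clamp_min(eps).log(), q, reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)

if __name__ == "__main__":
    dim, num_classes = 64, 19
    layer = CrossDomainAttention(dim)
    head = nn.Linear(dim, num_classes)         # toy per-token segmentation head

    src = torch.randn(2, 196, dim)             # source-domain tokens
    tgt_v1 = torch.randn(2, 196, dim)          # target tokens, augmented view 1
    tgt_v2 = torch.randn(2, 196, dim)          # target tokens, augmented view 2
    # (assumes augmentations that preserve token correspondence, e.g. photometric)

    out_self, attn_v1 = layer(tgt_v1, tgt_v1)  # target self-attention
    out_cross, _ = layer(tgt_v1, src)          # target queries, source keys/values
    _, attn_v2 = layer(tgt_v2, tgt_v2)         # self-attention on the other view

    # Output-level alignment: predictions from the two attention paths should agree.
    loss_out = symmetric_kl(head(out_self).softmax(dim=-1), head(out_cross).softmax(dim=-1))
    # Attention-level alignment: attention maps of the two augmented views should agree.
    loss_attn = symmetric_kl(attn_v1, attn_v2)
    print(loss_out.item(), loss_attn.item())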

Installation and Data Preparation

Since our model is primarily built on DAFormer, please refer to the Setup Environment and Setup Datasets sections in the original repo for instructions on setting up the environment and preparing the datasets.

Training

For training our model on GTAV->Cityscapes:

python run_experiments.py --config configs/cdac/gta2cs_uda_dacs_cda_mitb5_b2_s0.py

For training our model on Synthia->Cityscapes:

python run_experiments.py --config configs/cdac/synthia2cs_uda_dacs_cda_mitb5_b2_s0.py

For training our model on Cityscapes->ACDC:

python run_experiments.py --config configs/cdac/cs2acdc_uda_dacs_cda_mitb5_b2_s0.py

Testing

Our models pretrained on the three benchmarks are available online; please find them here. After downloading the checkpoints, run the following command:

sh test.sh path/to/checkpoint_directory

Acknowledgements

The code of this project heavily borrows from DAFormer and the repositories it depends on. We thank their authors for making the source code publicly available.
