
Code for "Uncertainty-Aware Unsupervised Domain Adaptation in Object Detection", IEEE TMM 2021


# Uncertainty-Aware Unsupervised Domain Adaptation in Object Detection

## Updates

## Paper

Uncertainty-Aware Unsupervised Domain Adaptation in Object Detection
Dayan Guan1, Jiaxing Huang1, Aoran Xiao1, Shijian Lu1, Yanpeng Cao2

1School of Computer Science and Engineering, Nanyang Technological University, Singapore
2School of Mechanical Engineering, Zhejiang University, Hangzhou, China

IEEE Transactions on Multimedia, 2021.

If you find this code useful for your research, please cite our paper:

```bibtex
@article{guan2021uncertainty,
  title={Uncertainty-aware unsupervised domain adaptation in object detection},
  author={Guan, Dayan and Huang, Jiaxing and Xiao, Aoran and Lu, Shijian and Cao, Yanpeng},
  journal={IEEE Transactions on Multimedia},
  year={2021},
  publisher={IEEE}
}
```

## Abstract

Unsupervised domain adaptive object detection aims to adapt detectors from a labelled source domain to an unlabelled target domain. Most existing works take a two-stage strategy that first generates region proposals and then detects objects of interest, where adversarial learning is widely adopted to mitigate the inter-domain discrepancy in both stages. However, adversarial learning may impair the alignment of well-aligned samples as it merely aligns the global distributions across domains. To address this issue, we design an uncertainty-aware domain adaptation network (UaDAN) that introduces conditional adversarial learning to align well-aligned and poorly-aligned samples separately in different manners. Specifically, we design an uncertainty metric that assesses the alignment of each sample and adjusts the strength of adversarial learning for well-aligned and poorly-aligned samples adaptively. In addition, we exploit the uncertainty metric to achieve curriculum learning that first performs easier image-level alignment and then more difficult instance-level alignment progressively. Extensive experiments over four challenging domain adaptive object detection datasets show that UaDAN achieves superior performance as compared with state-of-the-art methods.
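The uncertainty metric is only described at a high level above. As a rough illustration (not the paper's exact formulation), a common choice for such a metric is the normalised entropy of the detector's class posterior, which can then modulate a per-sample domain-adversarial loss. The function names and the down-weighting direction below are assumptions for illustration only:

```python
import numpy as np

def entropy_uncertainty(probs, eps=1e-12):
    """Normalised entropy of a class posterior: ~0 when confident, ~1 when uniform."""
    probs = np.asarray(probs, dtype=float)
    ent = -np.sum(probs * np.log(probs + eps), axis=-1)
    return ent / np.log(probs.shape[-1])

def modulated_adv_loss(d_probs, uncertainty):
    """Binary cross-entropy of a domain discriminator on source-domain samples,
    scaled per sample so that high-uncertainty samples contribute less.
    This is one plausible modulation, not the paper's exact rule."""
    d_probs = np.asarray(d_probs, dtype=float)
    bce = -np.log(d_probs + 1e-12)
    return float(np.mean((1.0 - uncertainty) * bce))
```

In UaDAN itself the uncertainty additionally drives a curriculum that schedules the easier image-level alignment before the harder instance-level alignment; see the paper for the exact definitions.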

## Installation

```bash
conda env create -f environment.yaml
conda activate uadan
python setup.py build develop
```

## Prepare Dataset

- **Pascal VOC**: download the Pascal VOC dataset to `UaDAN/datasets/voc`
- **Clipart1k**: download the Clipart1k dataset to `UaDAN/datasets/clipart` and unzip it (Clipart1k contains 1,000 comic images: 800 for training and 200 for validation), then copy the image sets:

```bash
cp -r tools/dataset/clipart/ImageSets datasets/clipart
```
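After the steps above, a quick way to confirm the dataset layout is a small check script. The sub-folder names below are assumptions based on the usual VOC/Clipart1k convention and the `cp` command above, not guaranteed by this repo:

```python
import os

def missing_dirs(root, required):
    """Return the required sub-directories that are absent under root."""
    return [d for d in required if not os.path.isdir(os.path.join(root, d))]

# Hypothetical expected layout (VOC-style folder names are an assumption):
expected = {
    "datasets/voc": ["VOC2007", "VOC2012"],
    "datasets/clipart": ["JPEGImages", "Annotations", "ImageSets"],
}

for root, dirs in expected.items():
    gap = missing_dirs(root, dirs)
    if gap:
        print(f"{root}: missing {gap}")
```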

## Pre-trained models

Pre-trained models can be downloaded here and put in `UaDAN/pretrained_models`.

## Evaluation

```bash
python tools/test_net.py --config-file "configs/UaDAN_Voc2Clipart.yaml" MODEL.WEIGHT "pretrained_models/UaDAN_Voc2Clipart.pth"
python tools/test_net.py --config-file "configs/UaDAN_City2Vistas.yaml" MODEL.WEIGHT "pretrained_models/UaDAN_City2Vistas.pth"
```

## Training

```bash
python tools/train_net.py --config-file "configs/UaDAN_Voc2Clipart.yaml"
python tools/test_net_all.py --config-file "configs/UaDAN_Voc2Clipart.yaml"
```

## Acknowledgements

This codebase borrows heavily from maskrcnn-benchmark and Domain-Adaptive-Faster-RCNN-PyTorch.

## Contact

If you have any questions, please contact: dayan.guan@ntu.edu.sg
