Towards Online Domain Adaptive Object Detection [WACV 2023]

Framework: PyTorch

Vibashan VS, Poojan Oza, Vishal M Patel

[Project Page] [WACV] [pdf] [BibTeX]

Contributions

  • To the best of our knowledge, this is the first work to consider both online and offline adaptation settings for detector models.
  • We propose a novel unified adaptation framework which makes detector models robust against online target distribution shifts.
  • We introduce the MemXformer module, which stores prototypical patterns of the target distribution and provides contrastive pairs to boost contrastive learning on the target domain.
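To make the MemXformer idea concrete, here is a minimal sketch of a memory bank that stores prototypical target-domain feature patterns and mines contrastive pairs from them. This is an illustration of the general mechanism, not the paper's actual MemXformer implementation; the class name, slot count, and EMA update rule are our assumptions.

```python
import torch
import torch.nn.functional as F

class MemoryBank:
    """Illustrative prototype memory (NOT the paper's MemXformer code).

    Stores L2-normalized prototype vectors of the target distribution and,
    for each query feature, returns a positive (most similar prototype)
    and a negative (least similar prototype) for contrastive learning.
    """
    def __init__(self, num_slots=8, dim=16, momentum=0.9):
        self.keys = F.normalize(torch.randn(num_slots, dim), dim=1)
        self.momentum = momentum

    def update(self, feats):
        # Assign each feature to its nearest slot, then update that slot
        # with an exponential moving average and re-normalize.
        feats = F.normalize(feats, dim=1)
        sim = feats @ self.keys.t()                 # (B, num_slots)
        idx = sim.argmax(dim=1)
        for i, f in zip(idx.tolist(), feats):
            self.keys[i] = self.momentum * self.keys[i] + (1 - self.momentum) * f
        self.keys = F.normalize(self.keys, dim=1)

    def contrastive_pairs(self, query):
        # Positive = closest stored prototype, negative = farthest one.
        q = F.normalize(query, dim=1)
        sim = q @ self.keys.t()
        pos = self.keys[sim.argmax(dim=1)]
        neg = self.keys[sim.argmin(dim=1)]
        return pos, neg
```

In the paper's setting the pairs returned here would feed a contrastive loss on target-domain features; the real module additionally uses transformer-style attention over the memory.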

Contents

  1. Installation Instructions
  2. Dataset Preparation
  3. Execution Instructions
  4. Results
  5. Citation

Installation Instructions

  • We use Python 3.6, PyTorch 1.9.0 (CUDA 10.2 build).
  • Our codebase is built on Detectron.
conda create -n online_da python=3.6

conda activate online_da

conda install pytorch==1.9.0 torchvision==0.10.0 torchaudio==0.9.0 cudatoolkit=10.2 -c pytorch

cd online-da
pip install -r requirements.txt

## Make sure you have GCC and G++ version <=8.0
cd ..
python -m pip install -e online-da
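The GCC/G++ version constraint above can be checked programmatically before building; a minimal sketch that compares version strings (the helper name is ours, not part of the repo):

```python
def gcc_ok(version: str, maximum=(8, 0)) -> bool:
    """Return True if a GCC version string like '7.5.0' satisfies the
    'version <= 8.0' build requirement (compares major.minor only)."""
    parts = tuple(int(p) for p in version.split(".")[:2])
    return parts <= maximum

print(gcc_ok("7.5.0"))  # True
print(gcc_ok("9.4.0"))  # False
```

You can feed it the output of `gcc -dumpversion` to verify your toolchain before running the editable install.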

Dataset Preparation

  • PASCAL_VOC 07+12: Please follow the instructions in py-faster-rcnn to prepare VOC datasets.
  • Clipart, WaterColor: See the dataset preparation instructions in Cross Domain Detection. Images translated by CycleGAN are available on that website.
  • Sim10k: Website Sim10k
  • Cityscapes, Foggy Cityscapes: Download from the Cityscapes website; see the dataset preparation code in DA-Faster RCNN.

Download all the datasets into the "./dataset" folder. The code expects data in the PASCAL_VOC format. For example, the Sim10k dataset is stored as follows.

$ cd ./dataset/Sim10k/VOC2012/
$ ls
Annotations  ImageSets  JPEGImages
$ cat ImageSets/Main/val.txt
3384827.jpg
3384828.jpg
3384829.jpg
.
.
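A split file in this layout can be generated directly from the images on disk. A sketch assuming the directory structure shown above (the helper function is ours, not part of the repo):

```python
from pathlib import Path

def write_split(voc_root: str, split: str = "val") -> Path:
    """Write ImageSets/Main/<split>.txt listing every image in JPEGImages,
    one filename per line, mirroring the Sim10k layout shown above."""
    root = Path(voc_root)
    out = root / "ImageSets" / "Main" / f"{split}.txt"
    out.parent.mkdir(parents=True, exist_ok=True)
    names = sorted(p.name for p in (root / "JPEGImages").glob("*.jpg"))
    out.write_text("\n".join(names) + "\n")
    return out
```

Note that some VOC-style loaders expect image IDs without the `.jpg` extension in the split file; match whatever convention your annotations use.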

Execution Instructions

Training

  • Download the source-trained model weights into the source_model folder: Link
CUDA_VISIBLE_DEVICES=$GPU_ID python tools/train_onlineda_net.py \
--config-file configs/online_da/onda_foggy.yaml --model-dir ./source_model/cityscape_baseline/model_final.pth
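Training produces both a student and a teacher model (the teacher is what evaluation loads, see below). Teacher–student schemes of this kind typically keep the teacher as an exponential moving average of the student; a minimal sketch of that standard update rule (the alpha value and function name are illustrative, not taken from this codebase):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, alpha: float = 0.999):
    """Move each teacher parameter toward the student's:
    t <- alpha * t + (1 - alpha) * s."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)
```

Called once per adaptation step, this keeps the teacher a smoothed, more stable copy of the online-adapting student.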

Evaluation

  • After training, load the teacher model weights and perform evaluation using
CUDA_VISIBLE_DEVICES=$GPU_ID python tools/plain_test_net.py --eval-only \
--config-file configs/online_da/foggy_baseline.yaml --model-dir $PATH_TO_CHECKPOINT
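If you need to extract the teacher weights from a checkpoint manually, the usual pattern is to look up the teacher's state dict by key. A sketch, where the key names tried ('model_teacher', 'model') are assumptions; inspect your checkpoint's actual keys:

```python
import torch
import torch.nn as nn

def load_teacher(model: nn.Module, ckpt_path: str) -> nn.Module:
    """Load teacher weights from a checkpoint file into `model`.
    Falls back through likely key names, then to a raw state dict."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state = ckpt.get("model_teacher") or ckpt.get("model") or ckpt
    model.load_state_dict(state, strict=False)
    return model
```

`strict=False` tolerates auxiliary keys (e.g. optimizer or memory-module buffers) that are present in training checkpoints but absent from the inference model.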

Results

  • Pre-trained models can be downloaded from Link.

Citation

If you find Online DA useful in your research, please consider starring ⭐ us on GitHub and citing 📚 our paper!

@inproceedings{vs2023towards,
  title={Towards Online Domain Adaptive Object Detection},
  author={VS, Vibashan and Oza, Poojan and Patel, Vishal M},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={478--488},
  year={2023}
}

Acknowledgement

We thank the developers and authors of Detectron for releasing their helpful codebase.

About

Official PyTorch codebase for Towards Online Domain Adaptive Object Detection [WACV 2023]
