A PyTorch Implementation of Strong-Weak Distribution Alignment for Adaptive Object Detection (CVPR 2019)

Introduction

Follow the faster-rcnn repository to set up the environment. You may encounter some issues when installing pytorch-faster-rcnn; many of them have already been reported in that repository's issue tracker. We used PyTorch 0.4.0 for this project. A different PyTorch version may cause errors that have to be handled depending on your environment.

Data Preparation

  • PASCAL_VOC 07+12: Please follow the instructions in py-faster-rcnn to prepare the VOC datasets.
  • Clipart, WaterColor: Follow the dataset preparation instructions in Cross Domain Detection. Images translated by CycleGAN are available on that website.
  • Sim10k: Download from the Sim10k website.
  • Cityscape-Translated Sim10k: TBA
  • Cityscape, FoggyCityscape: Download from the Cityscape website; see the dataset preparation code in DA-Faster RCNN.

All code is written to fit the PASCAL_VOC format. For example, the Sim10k dataset is stored as follows.

$ cd Sim10k/VOC2012/
$ ls
Annotations  ImageSets  JPEGImages
$ cat ImageSets/Main/val.txt
3384827.jpg
3384828.jpg
3384829.jpg
.
.
.

If you want to test the code on your own dataset, arrange the dataset in the PASCAL format, create a dataset class in lib/datasets/, and register it in lib/datasets/factory.py and lib/datasets/config_dataset.py. Then, add the dataset option to lib/model/utils/parser_func.py.
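The registration in factory.py follows the py-faster-rcnn factory pattern this repository builds on. The sketch below is illustrative only: the class name mydataset and its constructor arguments are placeholders, so mirror an existing entry in lib/datasets/factory.py for the exact signature.

# Sketch for lib/datasets/factory.py: register a new dataset name.
# "mydataset" and its constructor arguments are placeholders.
from datasets.mydataset import mydataset  # the class you created under lib/datasets/

# __sets is the dict already defined in factory.py that maps a dataset
# name string to a constructor closure.
for split in ['train', 'val', 'test']:
    name = 'mydataset_{}'.format(split)
    __sets[name] = (lambda split=split: mydataset(split))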

Data Path

Write your dataset directories' paths in lib/datasets/config_dataset.py.
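As a rough sketch (the attribute names below are illustrative; keep the names already used in lib/datasets/config_dataset.py and point them at your local directories):

# lib/datasets/config_dataset.py: root directories of the datasets you use.
# Attribute names here are examples only; follow the existing entries in the file.
__D.SIM10K = '/path/to/Sim10k/VOC2012'
__D.CITYSCAPE = '/path/to/Cityscapes'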

Pretrained Model

We used two models pre-trained on ImageNet in our experiments, VGG and ResNet101. You can download these two models from:

Download them and write their paths in __C.VGG_PATH and __C.RESNET_PATH in lib/model/utils/config.py.
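For example (the weight file names below are placeholders for wherever you saved the downloaded models):

# lib/model/utils/config.py: paths to the ImageNet-pretrained backbones.
__C.VGG_PATH = '/path/to/pretrained/vgg16.pth'
__C.RESNET_PATH = '/path/to/pretrained/resnet101.pth'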

Sample Model

Global-local alignment model for the WaterColor dataset.

Train

  • Sample training scripts are in the train_scripts folder.
  • With only local alignment loss,
 CUDA_VISIBLE_DEVICES=$GPU_ID python trainval_net_local.py \
                    --dataset source_dataset --dataset_t target_dataset --net vgg16 \
                    --cuda

Add --lc when using context-vector based regularization loss.

  • With only global alignment loss,
 CUDA_VISIBLE_DEVICES=$GPU_ID python trainval_net_global.py \
                    --dataset source_dataset --dataset_t target_dataset --net vgg16 \
                    --cuda

Add --gc when using context-vector based regularization loss.

  • With global and local alignment loss,
 CUDA_VISIBLE_DEVICES=$GPU_ID python trainval_net_global_local.py \
                    --dataset source_dataset --dataset_t target_dataset --net vgg16 \
                    --cuda

Add --lc and --gc when using context-vector based regularization loss.

Test

  • Sample test scripts are in the test_scripts folder.
 CUDA_VISIBLE_DEVICES=$GPU_ID python test_net_global_local.py \
                    --dataset target_dataset --net vgg16 \
                    --cuda --lc --gc --load_name path_to_model

Citation

Please cite the following reference if you use this repository in your project.

@article{saito2018strong,
  title={Strong-Weak Distribution Alignment for Adaptive Object Detection},
  author={Saito, Kuniaki and Ushiku, Yoshitaka and Harada, Tatsuya and Saenko, Kate},
  journal={arXiv},
  year={2018}
}