PyTorch implementation of SCL-Domain-Adaptive-Object-Detection

Introduction

Please follow the faster-rcnn repository to set up the environment. This code is based on the implementation of Strong-Weak Distribution Alignment for Adaptive Object Detection. We used PyTorch 0.4.0 for this project; a different PyTorch version may cause errors that have to be handled for each environment.
For convenience, this repository contains implementations of:

  • SCL: Towards Accurate Domain Adaptive Object Detection via Gradient Detach Based Stacked Complementary Losses (link)
  • Strong-Weak Distribution Alignment for Adaptive Object Detection, CVPR'19 (link)
  • Domain Adaptive Faster R-CNN for Object Detection in the Wild, CVPR'18 (Our re-implementation) (link)

Data preparation

Our implementation covers the cross-domain detection datasets used in the paper, e.g. Cityscapes, Foggy Cityscapes, Pascal VOC, Clipart, Watercolor and Sim10k (see lib/datasets/config_dataset.py for the full list).

Note that all of the code expects the Pascal VOC format. For example, the cityscape dataset is stored as:

$ cd cityscape/VOC2012 
$ ls
Annotations  ImageSets  JPEGImages
$ cd ImageSets/Main
$ ls
train.txt val.txt trainval.txt test.txt
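
If you are building this layout for your own data, the files in ImageSets/Main are plain lists of image IDs (annotation file names without the .xml extension). Below is a minimal sketch for generating them, assuming the XML annotations are already in Annotations/; the 80/20 split and the helper itself are illustrative, not part of this repository.

import os
import random

voc_root = "cityscape/VOC2012"                      # dataset root in Pascal VOC layout
ann_dir = os.path.join(voc_root, "Annotations")
split_dir = os.path.join(voc_root, "ImageSets", "Main")
os.makedirs(split_dir, exist_ok=True)

# Image IDs are the annotation file names without the .xml extension.
ids = sorted(f[:-4] for f in os.listdir(ann_dir) if f.endswith(".xml"))
random.seed(0)
random.shuffle(ids)

cut = int(0.8 * len(ids))                           # illustrative 80/20 train/val split
splits = {"train": ids[:cut], "val": ids[cut:], "trainval": ids}
# test.txt should list the held-out evaluation images; the validation IDs are
# reused here purely as a placeholder.
splits["test"] = ids[cut:]

for name, subset in splits.items():
    with open(os.path.join(split_dir, name + ".txt"), "w") as f:
        f.write("\n".join(sorted(subset)) + "\n")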

Note: If you want to use this code on your own dataset, arrange the dataset in Pascal VOC format, add a dataset class under lib/datasets/, and register it in lib/datasets/factory.py and lib/datasets/config_dataset.py. Then add the dataset option to lib/model/utils/parser_func.py and lib/model/utils/parser_func_multi.py. A sketch of the registration step is given below.
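
As a rough illustration of the registration step, the dataset factory in this code base follows the py-faster-rcnn pattern of a dictionary mapping dataset names to constructors. The class name my_dataset and its constructor signature below are hypothetical; mirror an existing entry such as pascal_voc in lib/datasets/factory.py for the exact form used here.

# Sketch of an entry in lib/datasets/factory.py (my_dataset and its
# constructor signature are hypothetical -- copy an existing entry).
from datasets.my_dataset import my_dataset

__sets = {}  # already defined near the top of factory.py

for split in ["train", "val", "trainval", "test"]:
    name = "my_dataset_{}".format(split)
    # Each entry is a zero-argument constructor, so datasets are built lazily by name.
    __sets[name] = (lambda split=split: my_dataset(split))

After this, the new dataset name can be passed through --dataset / --dataset_t once the parser files listed above also accept it.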

Data path

Write your dataset directories' paths in lib/datasets/config_dataset.py.

For example:

__D.CLIPART = "./clipart"
__D.WATER = "./watercolor"
__D.SIM10K = "Sim10k/VOC2012"
__D.SIM10K_CYCLE = "Sim10k_cycle/VOC2012"
__D.CITYSCAPE_CAR = "./cityscape/VOC2007"
__D.CITYSCAPE = "../DA_Detection/cityscape/VOC2007"
__D.FOGGYCITY = "../DA_Detection/foggy/VOC2007"

__D.INIT_SUNNY = "./init_sunny"
__D.INIT_NIGHT = "./init_night"

Pre-trained model

We used two models pre-trained on ImageNet as backbones for our experiments: VGG16 and ResNet101. Download these two models and set their paths in __C.VGG_PATH and __C.RESNET_PATH in lib/model/utils/config.py.
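
For instance, the relevant lines in lib/model/utils/config.py might look like the following; the file names are an assumption (they follow the common faster-rcnn.pytorch naming), so point them at wherever you saved the downloaded weights.

# lib/model/utils/config.py -- backbone weight paths (illustrative values;
# the exact file names of the downloaded weights are an assumption)
__C.VGG_PATH = "data/pretrained_model/vgg16_caffe.pth"
__C.RESNET_PATH = "data/pretrained_model/resnet101_caffe.pth"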

Our trained model
We are providing our trained models for Foggy Cityscapes, Watercolor and Clipart:

  1. Adaptation from Cityscapes to Foggy Cityscapes
  2. Adaptation from Pascal VOC to Watercolor
  3. Adaptation from Pascal VOC to Clipart

Train

We have provided sample training commands in the train_scripts folder; however, they only cover our model (SCL). Commands for training all three models are given below. In each command, $1 is the GPU id and $2 is the directory where checkpoints are saved.
For SCL: Towards Accurate Domain Adaptive Object Detection via Gradient Detach Based Stacked Complementary Losses:

CUDA_VISIBLE_DEVICES=$1 python trainval_net_SCL.py --cuda --net vgg16 --dataset cityscape --dataset_t foggy_cityscape --save_dir $2

For Domain Adaptive Faster R-CNN for Object Detection in the Wild:

CUDA_VISIBLE_DEVICES=$1 python trainval_net_dfrcnn.py --cuda --net vgg16 --dataset cityscape --dataset_t foggy_cityscape --save_dir $2

For Strong-Weak Distribution Alignment for Adaptive Object Detection:

CUDA_VISIBLE_DEVICES=$1 python trainval_net_global_local.py --cuda --net vgg16 --dataset cityscape --dataset_t foggy_cityscape --gc --lc --save_dir $2

Test

We have provided sample testing commands for our model in the test_scripts folder. For the other two models, adapt the corresponding training commands above to the matching test scripts (test_net_dfrcnn.py and test_net_global_local.py).

Citation

If you use our code or find it helpful for your research, please cite:

@article{shen2019SCL,
  title={SCL: Towards Accurate Domain Adaptive Object Detection via
Gradient Detach Based Stacked Complementary Losses},
  author={Zhiqiang Shen and Harsh Maheshwari and Weichen Yao and Marios Savvides},
  journal={arXiv preprint arXiv:1911.02559},
  year={2019}
}

Examples

Figure 1: Detection Results from Pascal VOC to Clipart.
Figure 2: Detection Results from Pascal VOC to Watercolor.