
Self-Guided Adaptation

Acknowledgment

The implementation is built on faster-rcnn.pytorch; please refer to the original project to set up the environment. A sketch of the setup steps is given after the prerequisites below.

Prerequisites

  • Python 2.7 or 3.6
  • PyTorch 0.4.0
  • CUDA 8.0 or higher
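
The following is a minimal setup sketch, not a verified install script: the PyTorch/torchvision versions match the prerequisites above, while the requirements.txt install and the lib/make.sh build step are assumed to follow the base faster-rcnn.pytorch project and should be confirmed against its instructions.

 # Minimal environment sketch (assumes conda and a matching CUDA toolchain).
 conda create -n sga python=3.6
 conda activate sga
 pip install torch==0.4.0 torchvision==0.2.1
 # Dependencies and the compiled CUDA/C extensions come from the base
 # faster-rcnn.pytorch project; follow its instructions for this step.
 pip install -r requirements.txt
 cd lib && sh make.sh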

Data Preparation

All code is written to work with the Pascal VOC data format.
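
As a rough guide, a VOC-style dataset is usually linked into the data/ directory as sketched below; the directory names are illustrative placeholders, and the exact paths the dataloaders expect should be checked against the base faster-rcnn.pytorch code.

 # Illustrative layout only; directory names are placeholders, not taken
 # from this repository:
 #   VOCdevkit/VOC2007/Annotations/     per-image XML annotations
 #   VOCdevkit/VOC2007/ImageSets/Main/  train/val/test split lists
 #   VOCdevkit/VOC2007/JPEGImages/      images
 mkdir -p data
 ln -s /path/to/VOCdevkit data/VOCdevkit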

Pretrained Model

We used ResNet101 pretrained on ImageNet in our experiments. You can download the model from:
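
Once downloaded, the weights need to be placed where the training code expects them. The path and filename below follow the base faster-rcnn.pytorch convention and are an assumption; verify the expected location in the training scripts.

 # Assumed location, following the base faster-rcnn.pytorch convention
 # (verify the expected path and filename in the training scripts):
 mkdir -p data/pretrained_model
 cp /path/to/resnet101_caffe.pth data/pretrained_model/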

Well-trained Domain Adaptation Object Detection Models

  • Cityscapes to KITTI (Res101-based): GoogleDrive
  • KITTI to Cityscapes (Res101-based): GoogleDrive
  • Cityscapes to Foggy Cityscapes (Res101-based): GoogleDrive
  • Pascal VOC to WaterColor (Res101-based): GoogleDrive
  • Daytime (Cityscapes) to Night-time (Detrac-Night) (Res101-based): GoogleDrive

Train

  • Train SGA with the self-guided adversarial loss and hardness loss (see the example invocation after this list):
 CUDA_VISIBLE_DEVICES=$GPU_ID python trainval_net_auto.py \
                    --dataset source_dataset --dataset_t target_dataset --net res101 \
                    --cuda
  • Train SGA with all components (self-guided adversarial loss, hardness loss, and self-guided progressive sampling):
 CUDA_VISIBLE_DEVICES=$GPU_ID python trainval_net_auto_self_pace.py \
                    --dataset source_dataset --dataset_t target_dataset --net res101 \
                    --cuda
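
For example, a Cityscapes-to-Foggy-Cityscapes run might look like the command below; the dataset identifiers cityscape and foggy_cityscape are assumptions and should be checked against the argument parser in the training scripts.

 # Example run on GPU 0; the --dataset / --dataset_t values are assumed names,
 # check trainval_net_auto.py for the identifiers it actually accepts.
 CUDA_VISIBLE_DEVICES=0 python trainval_net_auto.py \
                    --dataset cityscape --dataset_t foggy_cityscape --net res101 \
                    --cuda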
