
CMT: Contrastive Mean Teacher for Domain Adaptive Object Detectors

This is the official PyTorch implementation of our CVPR 2023 paper:

Contrastive Mean Teacher for Domain Adaptive Object Detectors

Shengcao Cao, Dhiraj Joshi, Liang-Yan Gui, Yu-Xiong Wang

(Figure: overview of the CMT pipeline)

Overview

In this repository, we include the implementation of Contrastive Mean Teacher (CMT), integrated with both base methods: Adaptive Teacher (AT, [code] [paper]) and Probabilistic Teacher (PT, [code] [paper]). Our code builds on the publicly available implementations of these two methods.
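At a high level, CMT pairs the standard mean-teacher update (the teacher is an exponential moving average of the student) with an object-level contrastive objective between student and teacher features. The sketch below is only an illustration of these two ingredients, not the repository's actual code; `ema_update`, `info_nce`, and all parameter choices here are hypothetical simplifications.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.999):
    """Mean-teacher update: each teacher parameter tracks an
    exponential moving average of the corresponding student parameter."""
    return {k: alpha * teacher[k] + (1 - alpha) * student[k] for k in teacher}

def info_nce(student_feats, teacher_feats, temperature=0.07):
    """InfoNCE-style contrastive loss on L2-normalized object features.
    Row i of each matrix is assumed to describe the same object, so the
    positives sit on the diagonal of the similarity matrix."""
    s = student_feats / np.linalg.norm(student_feats, axis=1, keepdims=True)
    t = teacher_feats / np.linalg.norm(teacher_feats, axis=1, keepdims=True)
    logits = s @ t.T / temperature               # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # pull positives together
```

The loss is small when student and teacher features for the same object align, and grows when they point at different objects, which is the behavior the contrastive branch of CMT relies on.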

Environment and Dataset Setup

We follow the original instructions of AT and PT to set up the environments and datasets. The details are included in their respective README files.

Usage

Here is an example script for reproducing our results of AT + CMT on Cityscapes -> Foggy Cityscapes (all splits):

# enter the code directory for AT + CMT
cd CMT_AT

# activate AT environment
conda activate at

# add the last two lines to enable CMT
python train_net.py \
    --num-gpus 4 \
    --config configs/faster_rcnn_VGG_cross_city.yaml \
    OUTPUT_DIR save/city_atcmt \
    SEMISUPNET.CONTRASTIVE True \
    SEMISUPNET.CONTRASTIVE_LOSS_WEIGHT 0.05

Similarly, for PT + CMT on Cityscapes -> Foggy Cityscapes (all splits), run the following steps:

# enter the code directory for PT + CMT
cd CMT_PT

# activate PT environment
conda activate pt

# add the last two lines to enable CMT
python train_net.py \
    --num-gpus 4 \
    --config configs/pt/final_c2f.yaml \
    MODEL.ANCHOR_GENERATOR.NAME "DifferentiableAnchorGenerator" \
    UNSUPNET.EFL True \
    UNSUPNET.EFL_LAMBDA [0.5,0.5] \
    UNSUPNET.TAU [0.5,0.5] \
    OUTPUT_DIR save/city_ptcmt \
    UNSUPNET.CONTRASTIVE True \
    UNSUPNET.CONTRASTIVE_LOSS_WEIGHT 0.05
  • Other configuration options may be found in configs.
  • To resume the training, simply add --resume to the command.
  • To evaluate an existing model checkpoint, add --eval-only and specify MODEL.WEIGHTS path/to/your/weights.pth in the command.
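The trailing `KEY VALUE` pairs in the commands above (e.g. `SEMISUPNET.CONTRASTIVE True`) follow the detectron2 convention of overriding nested config entries from the command line. As a rough illustration of how such pairs map onto a nested config, here is a hypothetical minimal re-implementation (`merge_opts` is not the actual detectron2/yacs code):

```python
import ast

def merge_opts(cfg, opts):
    """Merge detectron2-style `KEY VALUE` command-line pairs into a
    nested config dict. Dotted keys address nested sections; values are
    parsed as Python literals when possible, otherwise kept as strings."""
    for key, raw in zip(opts[0::2], opts[1::2]):
        node = cfg
        *parents, leaf = key.split(".")
        for p in parents:
            node = node.setdefault(p, {})
        try:
            node[leaf] = ast.literal_eval(raw)  # True, 0.05, [0.5,0.5], ...
        except (ValueError, SyntaxError):
            node[leaf] = raw                    # plain strings, e.g. paths
    return cfg
```

For example, `merge_opts({}, ["SEMISUPNET.CONTRASTIVE", "True"])` yields `{"SEMISUPNET": {"CONTRASTIVE": True}}`, which is why the two extra lines in the commands above are enough to switch CMT on without editing the YAML configs.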

Model Weights

Here we list the model weights for the results included in our paper:

| Dataset | Method | mAP (AP50) | Weights |
| --- | --- | --- | --- |
| Cityscapes -> Foggy Cityscapes (0.02 split) | PT + CMT | 43.8 | link |
| Cityscapes -> Foggy Cityscapes (0.02 split) | AT + CMT | 50.3 | link |
| Cityscapes -> Foggy Cityscapes (all splits) | PT + CMT | 49.3 | link |
| Cityscapes -> Foggy Cityscapes (all splits) | AT + CMT | 51.9 | link |
| KITTI -> Cityscapes | PT + CMT | 64.3 | link |
| Pascal VOC -> Clipart1k | AT + CMT | 47.0 | link |

Additional Changes

In addition to integrating CMT with AT or PT, we have also made some necessary changes to their code:

  • Adaptive Teacher (AT)

    • For the VGG backbone, our code loads weights pre-trained on ImageNet. The weights are converted from Torchvision and can be downloaded here. Please put this file at checkpoints/vgg16_bn-6c64b313_converted.pth.
    • For the Cityscapes and Foggy Cityscapes datasets, our code creates a cache file of the converted annotations when processing each dataset for the first time. Later experiments directly load that cache file, which greatly accelerates dataset building. You may also directly download the cache file here. In addition, we disable segmentation mask loading by default to further speed up preprocessing. See adapteacher/data/datasets/cityscapes.py and adapteacher/data/datasets/cityscapes_foggy.py for details.
    • We fix a bug regarding checkpoint loading: facebookresearch/adaptive_teacher#50
  • Probabilistic Teacher (PT)

    • We include datasets for Foggy Cityscapes with the 0.02 split: VOC2007_foggytrain_0.02 and VOC2007_foggyval_0.02.
    • We change the resume_or_load() function in trainer.py so that it can correctly resume an interrupted training.
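The annotation caching described above follows a common build-once, load-later pattern: the expensive conversion runs only on the first pass, and every later run deserializes the cached result. A minimal sketch of that pattern, assuming a pickle-based cache (the function name and signature are illustrative, not the repository's actual API):

```python
import os
import pickle

def load_annotations_cached(cache_path, build_fn):
    """Return cached annotations if the cache file exists; otherwise run
    the expensive conversion once and persist its result for later runs."""
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)          # fast path: reuse the cache
    annotations = build_fn()               # slow path: first-time conversion
    with open(cache_path, "wb") as f:
        pickle.dump(annotations, f)
    return annotations
```

With this shape, deleting the cache file is all that is needed to force a rebuild, e.g. after changing how the annotations are converted.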

Citation

If you use our method, code, or results in your research, please consider citing our paper:

@inproceedings{cao2023contrastive,
  title={Contrastive Mean Teacher for Domain Adaptive Object Detectors},
  author={Cao, Shengcao and Joshi, Dhiraj and Gui, Liang-Yan and Wang, Yu-Xiong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={23839--23848},
  year={2023}
}

License

This project is released under the Apache 2.0 license. Code adapted from other open-source repositories follows its original license.
