Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment (CDDMSL)
Sina Malakouti and Adriana Kovashka
This is the official repo for Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment (BMVC2023)
Please contact Sina Malakouti at sem238(at)pitt(dot)edu or siinamalakouti(at)gmail(dot)com for any questions or more information.
arXiv | Official BMVC Proceeding | Video | Poster | Supplement | BMVC Project Page
This repo will be updated soon!
To install the project, please see RegionCLIP and Detectron2.
For this task, we use PASCAL-VOC as the labeled domain, and one of Clipart, Comic, or Watercolor as the unlabeled domain. For instance, if Pascal-VOC and Clipart are used as the labeled and unlabeled source domains, then Comic and Watercolor are the target domains in the DG experiment.
Please see here for downloading the dataset.
Please see the following files for dataset creation and/or modification (a registration sketch follows the list):
- detectron2/data/datasets/pascal_voc.py
- detectron2/data/datasets/builtin.py
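As a rough illustration, the artistic domains are distributed in Pascal-VOC format, so they can be registered with Detectron2's VOC helper. The sketch below is only an assumption about dataset names, directory layout, and class lists; the actual registrations live in the files above.

```python
# Minimal sketch, assuming VOC-style directory layouts and placeholder dataset names;
# the real registrations are in detectron2/data/datasets/builtin.py.
from detectron2.data.datasets.pascal_voc import register_pascal_voc

ROOT = "datasets"  # assumed dataset root

# Labeled source domain (standard VOC split).
register_pascal_voc("voc_2007_trainval", f"{ROOT}/VOC2007", "trainval", year=2007)

# Artistic domains (Clipart, Comic, Watercolor) also ship in VOC format.
# Comic and Watercolor use a smaller class set, so pass class_names explicitly if needed.
register_pascal_voc("clipart_train", f"{ROOT}/clipart", "train", year=2012)
register_pascal_voc("watercolor_test", f"{ROOT}/watercolor", "test", year=2012)
```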
Please download Cityscapes and Foggy Cityscapes, as well as BDD100k. Note that for BDD100k, we only used the validation set.
Please see the following files for dataset creation and/or modification:
- detectron2/data/datasets/cityscapes.py
- detectron2/data/datasets/builtin.py. For BDD100k, we used the COCO format to register the data (see the sketch below)
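For reference, a COCO-format registration of the BDD100k validation set might look like the sketch below; the dataset name, annotation file, and image root are placeholders, not the exact paths used in builtin.py.

```python
# Minimal sketch, assuming COCO-format annotations for the BDD100k validation set;
# the name and paths below are assumptions, not the exact ones used in builtin.py.
from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "bdd100k_val",                                          # assumed dataset name
    {},                                                     # metadata; thing_classes can be set here
    "datasets/bdd100k/annotations/bdd100k_val_coco.json",   # assumed COCO-format annotation file
    "datasets/bdd100k/images/100k/val",                     # assumed image root
)
```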
Please download the pre-trained parameters from Google Drive (will be updated soon to cover all parameters).
You can find the checkpoints required for both training and evaluation in the Google Drive. Some of the available parameters are:
- RegionCLIP pretrained parameters
- Text Embedding (VOC)
- Text Embedding (Cityscapes)
- Vision-to-Language Transformer
- Real-to-Artistic Parameters
- Adverse-Weather parameters
- An example of training real-to-artistic generalization is available in faster_rcnn_voc.sh
- An example of training adverse-weather generalization is available in faster_rcnn_city.sh
During training, we evaluate on all source and target domains. For inference only, set the module weights and add the --eval-only flag in the bash file.
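For reference, an evaluation-only run in plain Detectron2 roughly corresponds to the sketch below; the config file and checkpoint path are placeholders, and the actual bash scripts may wrap a RegionCLIP-specific trainer.

```python
# Minimal sketch of evaluation-only, mirroring what --eval-only does in the bash scripts.
# The config file and checkpoint path are assumptions, not actual files from this repo.
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file("configs/faster_rcnn_voc.yaml")     # assumed config file
cfg.MODEL.WEIGHTS = "checkpoints/real_to_artistic.pth"  # downloaded checkpoint

model = DefaultTrainer.build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)    # load the trained weights
results = DefaultTrainer.test(cfg, model)               # evaluate on cfg.DATASETS.TEST
print(results)
```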
We have provided the pre-trained parameters for the ClipCap mapping network here. If you wish to run the pre-training yourself, please follow these steps:
- Follow the instructions on ClipCap to install the project and download the COCO dataset.
- Include the RegionCLIP2CLIP.py file in the ClipCap repository.
- Replace the parse_coco.py in the main repository with the one provided here. The only difference is that we rename some of the parameters in the RegionCLIP encoder so that the naming format matches CLIP's, allowing the mapping network to be trained successfully (see the sketch after this list).
- Then execute the following commands:
python parse_coco.py --clip_model_type RN50
python train.py --only_prefix --data ./data/coco/oscar_split_RN50_train.pkl --out_dir ./coco_train/ --mapping_type transformer --num_layers 8 --prefix_length 40 --prefix_length_clip 40 --is_rn
- For training/inference of the RegionCLIP pre-trained model, please refer to here.
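As a rough illustration of the renaming step mentioned above, the sketch below loads a RegionCLIP checkpoint and re-keys its image-encoder weights to CLIP's naming scheme. The "backbone." to "visual." prefix mapping is an assumption for illustration only; see the modified parse_coco.py for the exact renaming used in this repo.

```python
# Minimal sketch of re-keying RegionCLIP encoder weights to CLIP's naming scheme.
# The "backbone." -> "visual." prefix mapping is an assumption, not the exact map
# used in this repo's parse_coco.py.
import torch

ckpt = torch.load("regionclip_pretrained.pth", map_location="cpu")  # assumed checkpoint path
state_dict = ckpt.get("model", ckpt)

renamed = {}
for key, value in state_dict.items():
    if key.startswith("backbone."):
        # RegionCLIP keeps the image encoder under "backbone.*";
        # CLIP's RN50 expects the same tensors under "visual.*".
        renamed["visual." + key[len("backbone."):]] = value
    else:
        renamed[key] = value

torch.save(renamed, "regionclip_as_clip.pth")
```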
If you find this repo useful, please consider citing our paper:
@article{malakouti2023semi,
title={Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment},
author={Malakouti, Sina and Kovashka, Adriana},
journal={arXiv preprint arXiv:2309.13525},
year={2023}
}
This repo is based on Detectron2 and RegionCLIP repositories.