Junjie Ye, Changhong Fu, Guangze Zheng, Danda Pani Paudel, and Guang Chen. Unsupervised Domain Adaptation for Nighttime Aerial Tracking. In CVPR, pages 1-10, 2022.
UDAT is an unsupervised domain adaptation framework for visual object tracking. This repo contains its Python implementation.
Before training, we need to preprocess the unlabelled training data to generate training pairs.
- Download the proposed NAT2021-train set.
- Customize the directory of the train set in `lowlight_enhancement.py` and enhance the nighttime sequences:

  ```bash
  cd preprocessing/
  python lowlight_enhancement.py  # enhanced sequences will be saved at '/YOUR/PATH/NAT2021/train/data_seq_enhanced/'
  ```
- Download the video saliency detection model here and place it at `preprocessing/models/checkpoints/`.
- Predict salient objects and obtain candidate boxes (a simplified sketch of this step follows the list):

  ```bash
  python inference.py  # candidate boxes will be saved at 'coarse_boxes/' as .npy
  ```
- Generate pseudo annotations from candidate boxes using dynamic programming (a sketch of the dynamic-programming idea also follows the list):

  ```bash
  python gen_seq_bboxes.py  # pseudo box sequences will be saved at 'pseudo_anno/'
  ```
- Generate cropped training patches and a JSON file for training (an illustration of a possible JSON layout follows the list):

  ```bash
  python par_crop.py
  python gen_json.py
  ```
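
For intuition, here is a minimal sketch of how per-frame salient-object predictions could be turned into candidate boxes. The thresholding and connected-component logic, the sequence name, and the file layout (`saliency_maps/`, `sequence_001`) are assumptions for illustration only; refer to `inference.py` for the actual implementation.

```python
# Hedged sketch: turn per-frame saliency maps into candidate boxes.
# Paths, threshold, and min_area are illustrative assumptions, not the inference.py logic.
import glob
import os

import cv2
import numpy as np


def saliency_to_boxes(saliency, thresh=0.5, min_area=64):
    """Binarize a saliency map in [0, 1] and return candidate boxes as [x, y, w, h]."""
    mask = (saliency >= thresh).astype(np.uint8)
    num, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, num):  # label 0 is background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append([x, y, w, h])
    return np.array(boxes, dtype=np.float32)


# Hypothetical layout: one grayscale saliency map per frame of a sequence.
seq_boxes = []
for path in sorted(glob.glob('saliency_maps/sequence_001/*.png')):
    sal = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    seq_boxes.append(saliency_to_boxes(sal))

# Save one object array per sequence (frames may have different numbers of candidates).
os.makedirs('coarse_boxes', exist_ok=True)
out = np.empty(len(seq_boxes), dtype=object)
out[:] = seq_boxes
np.save('coarse_boxes/sequence_001.npy', out, allow_pickle=True)
```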
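
Likewise, the dynamic-programming idea behind the pseudo-annotation step can be pictured as choosing one candidate box per frame so that the resulting box trajectory stays smooth over time. The cost below (center distance plus size change) and the assumption of at least one candidate per frame are illustrative only; `gen_seq_bboxes.py` implements the actual algorithm.

```python
# Hedged sketch: pick one candidate box per frame with dynamic programming so that
# consecutive boxes stay close. Illustrative cost, not the gen_seq_bboxes.py algorithm.
import numpy as np


def transition_cost(box_a, box_b):
    """Penalize jumps in center position and in size between consecutive boxes [x, y, w, h]."""
    center_a = box_a[:2] + box_a[2:] / 2.0
    center_b = box_b[:2] + box_b[2:] / 2.0
    return np.linalg.norm(center_a - center_b) + np.abs(box_a[2:] - box_b[2:]).sum()


def select_box_sequence(candidates):
    """candidates: list over frames; each entry is a non-empty (N_t, 4) array of boxes."""
    num_frames = len(candidates)
    costs = [np.zeros(len(candidates[0]))]  # accumulated cost per candidate of frame 0
    back = []                                # backpointers per frame
    for t in range(1, num_frames):
        prev, cur = candidates[t - 1], candidates[t]
        cost_t = np.empty(len(cur))
        back_t = np.empty(len(cur), dtype=int)
        for j, box in enumerate(cur):
            step = [costs[-1][i] + transition_cost(prev[i], box) for i in range(len(prev))]
            back_t[j] = int(np.argmin(step))
            cost_t[j] = step[back_t[j]]
        costs.append(cost_t)
        back.append(back_t)
    # Backtrack the cheapest path from the last frame to the first.
    idx = int(np.argmin(costs[-1]))
    path = [idx]
    for back_t in reversed(back):
        idx = int(back_t[idx])
        path.append(idx)
    path.reverse()
    return np.stack([candidates[t][k] for t, k in enumerate(path)])
```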
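
Finally, trackers in the SiamCAR/SiamBAN (pysot) family typically consume a `train.json` that maps video → track → frame → bounding box. The snippet below only illustrates that assumed layout with hypothetical file names; check the output of `gen_json.py` for the exact format used here.

```python
# Hedged sketch of a pysot-style train.json layout (assumed; see gen_json.py for the real format).
import json

import numpy as np

pseudo_boxes = np.load('pseudo_anno/sequence_001.npy')  # assumed (T, 4) pseudo boxes as [x, y, w, h]

annotation = {
    'sequence_001': {
        '00': {  # a single pseudo track per video
            f'{frame:06d}': [float(x), float(y), float(x + w), float(y + h)]  # [x1, y1, x2, y2]
            for frame, (x, y, w, h) in enumerate(pseudo_boxes)
        }
    }
}

with open('train.json', 'w') as f:
    json.dump(annotation, f, indent=2)
```
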
For training, take UDAT-CAR for instance.

- Apart from the above target domain dataset NAT2021, you need to download and prepare the source domain datasets VID and GOT-10K.
- Download the pre-trained daytime model (SiamCAR/SiamBAN) and place it at `UDAT/tools/snapshot`.
- Start training:

  ```bash
  cd UDAT/CAR
  export PYTHONPATH=$PWD
  python tools/train.py
  ```
For testing, take UDAT-CAR for instance.

- For a quick test, you can download our trained model for UDAT-CAR (or UDAT-BAN) and place it at `UDAT/CAR/experiments/udatcar_r50_l234`.
- Start testing:

  ```bash
  python tools/test.py --dataset NAT
  ```
- Start evaluating:

  ```bash
  python tools/eval.py --dataset NAT
  ```
```bibtex
@inproceedings{Ye2022CVPR,
  title={{Unsupervised Domain Adaptation for Nighttime Aerial Tracking}},
  author={Ye, Junjie and Fu, Changhong and Zheng, Guangze and Paudel, Danda Pani and Chen, Guang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022},
  pages={1-10}
}
```
We sincerely thank the contributions of the following repos: SiamCAR, SiamBAN, DCFNet, DCE, and USOT.
If you have any questions, please contact Junjie Ye at ye.jun.jie@tongji.edu.cn or Changhong Fu at changhongfu@tongji.edu.cn.