
CiteTracker

The official implementation of the ICCV 2023 paper CiteTracker: Correlating Image and Text for Visual Tracking.

[Models][Raw Results][Data]

Framework

Install the environment

Option 1: Use Anaconda (CUDA 10.2)

conda create -n citetrack python=3.8
conda activate citetrack
bash install.sh

Option 2: Use Anaconda (CUDA 11.3)

conda env create -f environment.yaml

Set project paths

Run the following command to set the paths for this project:

python tracking/create_default_local_file.py --workspace_dir . --data_dir ./data --save_dir ./output

After running this command, you can also modify the paths by editing these two files:

lib/train/admin/local.py  # paths about training
lib/test/evaluation/local.py  # paths about testing
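For reference, the generated path files are plain Python modules. Below is a hypothetical sketch of what lib/test/evaluation/local.py might look like; the function and attribute names follow the pytracking-style convention that codebases like this typically inherit, so verify them against the file that create_default_local_file.py actually generates:

```python
# Hypothetical sketch of lib/test/evaluation/local.py (pytracking-style).
# The EnvSettings class here is a stand-in for the real one in the repo.

class EnvSettings:
    """Container for benchmark and output paths."""
    pass

def local_env_settings():
    settings = EnvSettings()
    # Benchmark data roots (must match the layout under ./data)
    settings.lasot_path = './data/lasot'
    settings.got10k_path = './data/got10k'
    settings.trackingnet_path = './data/trackingnet'
    # Where raw tracking results and other outputs are written
    settings.results_path = './output/test/tracking_results'
    settings.save_dir = './output'
    return settings
```

Editing the string literals in this file (and its training counterpart) is equivalent to re-running the command with different `--data_dir`/`--save_dir` values.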

Data Preparation

Put the tracking datasets in ./data. The directory should look like this:

${PROJECT_ROOT}
 -- data
     -- lasot
         |-- airplane
         |-- basketball
         |-- bear
         ...
     -- got10k
         |-- test
         |-- train
         |-- val
     -- coco
         |-- annotations
         |-- images
     -- trackingnet
         |-- TRAIN_0
         |-- TRAIN_1
         ...
         |-- TRAIN_11
         |-- TEST
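Missing or misplaced dataset folders are a common source of cryptic loader errors, so it can help to sanity-check the layout before training. The snippet below is a small standalone check based on the tree above; the folder names are taken directly from it, so trim the table if you only use a subset of the datasets:

```python
# Sanity-check that the dataset folders from the expected layout exist
# under ./data. Only directory presence is checked, not contents.
from pathlib import Path

EXPECTED = {
    'lasot': [],                       # sequence folders (airplane/, bear/, ...)
    'got10k': ['train', 'val', 'test'],
    'coco': ['annotations', 'images'],
    'trackingnet': ['TEST'],           # plus TRAIN_0 .. TRAIN_11
}

def check_layout(root='./data'):
    """Return a list of expected directories that are missing."""
    missing = []
    for name, subdirs in EXPECTED.items():
        base = Path(root) / name
        if not base.is_dir():
            missing.append(str(base))
            continue
        for sub in subdirs:
            if not (base / sub).is_dir():
                missing.append(str(base / sub))
    return missing

if __name__ == '__main__':
    for path in check_layout():
        print('missing:', path)
```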

Training

Download pre-trained MAE ViT-Base weights and put them under $PROJECT_ROOT$/pretrained_models (different pretrained models can also be used; see MAE for more details).
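If you prefer to script the download, something like the following works; the URL below is the ViT-Base checkpoint link published in the MAE repository at the time of writing, so verify it there before relying on it:

```python
# Fetch the MAE ViT-Base weights into ./pretrained_models.
# MAE_URL is the checkpoint link from the MAE authors' repository;
# double-check it against https://github.com/facebookresearch/mae.
import os
import urllib.request

MAE_URL = 'https://dl.fbaipublicfiles.com/mae/pretrain/mae_pretrain_vit_base.pth'

def fetch_mae_weights(dest_dir='./pretrained_models'):
    """Download the checkpoint if it is not already present; return its path."""
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, os.path.basename(MAE_URL))
    if not os.path.exists(dest):
        urllib.request.urlretrieve(MAE_URL, dest)  # large download (~300 MB)
    return dest
```

Run `fetch_mae_weights()` once before launching training.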

python tracking/train.py --script citetrack --config vitb_384_mae_ce_32x4_ep300 --save_dir ./output --mode multiple --nproc_per_node 4 --use_wandb 1

Replace --config with the desired model config under experiments/citetrack. We use wandb to record detailed training logs; if you don't want to use wandb, set --use_wandb 0.

Evaluation

Download the model weights from Models.

Put the downloaded weights under $PROJECT_ROOT$/output/checkpoints/train/citetrack.

Change the corresponding values in lib/test/evaluation/local.py to the actual benchmark paths.

Some testing examples:

  • LaSOT or other off-line evaluated benchmarks (modify --dataset correspondingly)
python tracking/test.py citetrack vitb_384_mae_ce_32x4_ep300 --dataset lasot --threads 16 --num_gpus 4
python tracking/analysis_results.py # need to modify tracker configs and names
  • GOT10K-test
python tracking/test.py citetrack vitb_384_mae_ce_32x4_got10k_ep100 --dataset got10k_test --threads 16 --num_gpus 4
python lib/test/utils/transform_got10k.py --tracker_name citetrack --cfg_name vitb_384_mae_ce_32x4_got10k_ep100
  • TrackingNet
python tracking/test.py citetrack vitb_384_mae_ce_32x4_ep300 --dataset trackingnet --threads 16 --num_gpus 4
python lib/test/utils/transform_trackingnet.py --tracker_name citetrack --cfg_name vitb_384_mae_ce_32x4_ep300
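The analysis script above aggregates the raw result files into benchmark scores. As a rough, self-contained illustration of the core computation, here is a sketch that scores one sequence's predicted boxes against ground truth using mean IoU (mean overlap is the quantity a success/AUC curve is built from); the `(x, y, w, h)` box format follows the pytracking-style raw-results convention, which is an assumption to verify against this repo's files:

```python
# Standalone sketch of per-sequence scoring, independent of the repo's
# analysis_results.py. Boxes are (x, y, w, h) tuples in pixels.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def mean_overlap(pred_boxes, gt_boxes):
    """Average IoU over a sequence of frames."""
    return sum(iou(p, g) for p, g in zip(pred_boxes, gt_boxes)) / len(gt_boxes)
```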

Acknowledgments

  • Thanks to the OSTrack and COCOOP libraries, which helped us quickly implement our ideas.
  • We use the implementation of the ViT from the Timm repo.

Citation

@inproceedings{citetracker,
  title={CiteTracker: Correlating Image and Text for Visual Tracking},
  author={Li, Xin and Huang, Yuqing and He, Zhenyu and Wang, Yaowei and Lu, Huchuan and Yang, Ming-Hsuan},
  booktitle={ICCV},
  year={2023}
}
