
SiamAtt

This is the official implementation of “SiamAtt: Siamese attention network for visual tracking”.
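For readers unfamiliar with attention modules in Siamese trackers, the sketch below shows a generic channel-and-spatial attention block in PyTorch. It is illustrative only and is not the exact module used in the paper; the `AttentionBlock` name, layer sizes, and reduction ratio are assumptions.

    # Generic channel + spatial attention block (illustration only; not the
    # exact SiamAtt module -- layer sizes and names are assumptions).
    import torch
    import torch.nn as nn


    class AttentionBlock(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            # Channel attention: pool spatially, then re-weight channels.
            self.channel_gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )
            # Spatial attention: re-weight locations from pooled channel maps.
            self.spatial_gate = nn.Sequential(
                nn.Conv2d(2, 1, kernel_size=7, padding=3),
                nn.Sigmoid(),
            )

        def forward(self, x):
            x = x * self.channel_gate(x)                      # B x C x H x W
            avg_map = torch.mean(x, dim=1, keepdim=True)      # B x 1 x H x W
            max_map, _ = torch.max(x, dim=1, keepdim=True)    # B x 1 x H x W
            return x * self.spatial_gate(torch.cat([avg_map, max_map], dim=1))


    if __name__ == "__main__":
        feat = torch.randn(1, 256, 25, 25)      # e.g. a search-region feature map
        print(AttentionBlock(256)(feat).shape)  # torch.Size([1, 256, 25, 25])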

Dependencies

  • Python 3.7
  • PyTorch 1.0.0
  • numpy
  • CUDA 10
  • skimage
  • matplotlib
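The repository does not ship a pinned environment file, so the commands below are only one possible setup. The environment name siamatt is arbitrary, and the PyTorch build should match your CUDA 10 install (see the official PyTorch instructions for the right package):

    conda create -n siamatt python=3.7
    conda activate siamatt
    # install a CUDA 10 build of PyTorch 1.0.0 per the official instructions, e.g.:
    conda install pytorch=1.0.0 cuda100 -c pytorch
    pip install numpy scikit-image matplotlib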

Prepare training dataset

Detailed preparation instructions for the training datasets are provided in the training_dataset directory.

Training:

CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch \
    --nproc_per_node=2 \
    --master_port=2333 \
    ../../tools/train.py --cfg config.yaml
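Here --nproc_per_node should match the number of GPUs listed in CUDA_VISIBLE_DEVICES (two in this example), and --cfg points to the experiment's config.yaml in the current directory.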

Testing:

python ../tools/test.py 
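The test command above is shown without arguments. Since the tracker builds on PySOT, a full evaluation call typically looks like the sketch below; the flag names follow PySOT's test script, and the snapshot and dataset values are placeholders that may differ in this repository:

    python ../tools/test.py \
        --config config.yaml \
        --snapshot snapshot/checkpoint_e20.pth \
        --dataset OTB100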

References

[1] SiamRPN++: Evolution of Siamese Visual Tracking with Very Deep Networks. Bo Li, Wei Wu, Qiang Wang, Fangyi Zhang, Junliang Xing, Junjie Yan. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

Acknowledgment

Our SiamAtt tracker is built on PySOT. We sincerely thank Bo Li and the other PySOT authors for providing this great work.

Citation

If you use this code in a publication, please cite our paper:

@article{yang2020siamatt,
  title={SiamAtt: Siamese attention network for visual tracking},
  author={Yang, Kai and He, Zhenyu and Zhou, Zikun and Fan, Nana},
  journal={Knowledge-Based Systems},
  volume={203},
  pages={106079},
  year={2020},
  publisher={Elsevier}
}
