Multimodal Tracking Survey

A comprehensive survey on multimodal tracking [Paper], including RGB-T and RGB-D tracking methods. This list will be updated on a long-term basis. If your related paper is missing from this review, feel free to contact pyzhang@mail.dlut.edu.cn.

Citation

If our paper and repository are helpful for your work, please cite us:

@article{Zhang_Arxiv20_MM_tracking_survey,
  author  = {Pengyu Zhang and Dong Wang and Huchuan Lu},
  title   = {Multi-modal Visual Tracking: Review and Experimental Comparison},
  journal = {arXiv preprint arXiv:2012.04176},
  year    = {2020}
}

Multimodal Tracking List

RGB-D Tracking

2022

  • DMTracker: Shang Gao, Jinyu Yang, Zhe Li, Feng Zheng, Ales Leonardis, Jingkuan Song. Learning Dual-Fused Modality-Aware Representations for RGBD Tracking. In ECCV Workshop, 2022. [Paper]

2020

  • WCO: Weichun Liu, Xiaoan Tang, Chengling Zhao. Robust RGBD Tracking via Weighted Convolution Operators. In Sensors 20(8), 2020. [Paper]

2019

  • 3DMS: Alexander Gutev, Carl James Debono. Exploiting Depth Information to Increase Object Tracking Robustness. In ICST 2019. [Paper]
  • OTR: Ugur Kart, Alan Lukezic, Matej Kristan, Joni-Kristian Kamarainen, Jiri Matas. Object Tracking by Reconstruction with View-Specific Discriminative Correlation Filters. In CVPR 2019. [Paper] [Code]
  • TACF: Yangliu Kuai, Gongjian Wen, Dongdong Li, Jingjing Xiao. Target-Aware Correlation Filter Tracking in RGBD Videos. In Sensors 19(20), 2019. [Paper]
  • CA3DMS: Ye Liu, Xiao-Yuan Jing, Jianhui Nie, Hao Gao, Jun Liu, Guo-Ping Jiang. Context-Aware Three-Dimensional Mean-Shift With Occlusion Handling for Robust Object Tracking in RGB-D Videos. In TMM 21(3), 2019. [Paper]
  • OTOD: Yujun Xie, Yao Lu, Shuang Gu. RGB-D Object Tracking with Occlusion Detection. In CIS 2019. [Paper]

2018

  • CSR-rgbd: Uğur Kart, Joni-Kristian Kämäräinen, Jiří Matas. How to Make an RGBD Tracker? In ECCV Workshop 2018. [Paper] [Code]
  • DMDCF: Uğur Kart, Joni-Kristian Kämäräinen, Jiří Matas, Lixin Fan, Francesco Cricri. Depth Masked Discriminative Correlation Filter. In ICPR 2018. [Paper]
  • SEOH: Jiaxu Leng, Ying Liu. Real-Time RGB-D Visual Tracking With Scale Estimation and Occlusion Handling. In Access (6), 2018. [Paper]
  • ARDM: Jingjing Xiao, Rustam Stolkin, Yuqing Gao, Aleš Leonardis. Robust Fusion of Color and Depth Data for RGB-D Target Tracking Using Adaptive Range-Invariant Depth Models and Spatio-Temporal Consistency Constraints. In TC 48(8) 2018. [Paper] [Code]
  • OACPF: Yayu Zhai, Ping Song, Zonglei Mou, Xiaoxiao Chen, Xiongjun Liu. Occlusion-Aware Correlation Particle Filter Target Tracking Based on RGBD Data. In Access (6), 2018. [Paper]
  • CCF: Guanqun Li, Lei Huang, Peichang Zhang, Qiang Li, YongKai Huo. Depth Information Aided Constrained Correlation Filter for Visual Tracking. In GSKI 2018. [Paper]
  • RTKCF: Han Zhang, Meng Cai, Jianxun Li. A Real-time RGB-D tracker based on KCF. In CCDC 2018. [Paper]

2017

  • ROTSL: Zi-ang Ma, Zhi-yu Xiang. Robust Object Tracking with RGBD-based Sparse Learning. In ITEE (18), 2017. [Paper]

2016

  • DLST: Ning An, Xiao-Guang Zhao, Zeng-Guang Hou. Online RGB-D Tracking via Detection-Learning-Segmentation. In ICPR 2016. [Paper]
  • DSKCF: Sion Hannuna, Massimo Camplani, Jake Hall, Majid Mirmehdi, Dima Damen, Tilo Burghardt, Adeline Paiement, Lili Tao. DS-KCF: A Real-time Tracker for RGB-D Data. In RTIP (16), 2016. [Paper] [Code]
  • 3DT: Adel Bibi, Tianzhu Zhang, Bernard Ghanem. 3D Part-Based Sparse Tracker with Automatic Synchronization and Registration. In CVPR 2016. [Paper] [Code]
  • OAPF: Kourosh Meshgi, Shin-ichi Maeda, Shigeyuki Oba, Henrik Skibbe, Yu-zhe Li, Shin Ishii. Occlusion Aware Particle Filter Tracker to Handle Complex and Persistent Occlusions. In CVIU (150), 2016. [Paper]

2015

  • ISOD: Yan Chen, Yingju Shen, Xin Liu, Bineng Zhong. 3D Object Tracking via Image Sets and Depth-Based Occlusion Detection. In SP (112), 2015. [Paper]
  • DSOH: Massimo Camplani, Sion Hannuna, Majid Mirmehdi, Dima Damen, Adeline Paiement, Lili Tao, Tilo Burghardt. Real-time RGB-D Tracking with Depth Scaling Kernelised Correlation Filters and Occlusion Handling. In BMVC, 2015. [Paper] [Code]
  • DOHR: Ping Ding, Yan Song. Robust Object Tracking Using Color and Depth Images with a Depth Based Occlusion Handling and Recovery. In FSKD, 2015. [Paper]
  • CDG: Huizhang Shi, Changxin Gao, Nong Sang. Using Consistency of Depth Gradient to Improve Visual Tracking in RGB-D sequences. In CAC, 2015. [Paper]
  • OL3DC: Bineng Zhong, Yingju Shen, Yan Chen, Weibo Xie, Zhen Cui, Hongbo Zhang, Duansheng Chen, Tian Wang, Xin Liu, Shujuan Peng, Jin Gou, Jixiang Du, Jing Wang, Wenming Zheng. Online Learning 3D Context for Robust Visual Tracking. In Neurocomputing (151), 2015. [Paper]

2014

  • MCBT: Qi Wang, Jianwu Fang, Yuan Yuan. Multi-Cue Based Tracking. In Neurocomputing (131), 2014. [Paper]

2012

  • AMCT: Germán Martín García, Dominik Alexander Klein, Jörg Stückler, Simone Frintrop, Armin B. Cremers. Adaptive Multi-cue 3D Tracking of Arbitrary Objects. In JDOS, 2012. [Paper]

Datasets

  • PTB: Shuran Song, Jianxiong Xiao. Tracking Revisited using RGBD Camera: Unified Benchmark and Baselines. In ICCV, 2013. [Paper] [Project] [Dataset]
  • STC: Jingjing Xiao, Rustam Stolkin, Yuqing Gao, Aleš Leonardis. Robust Fusion of Color and Depth Data for RGB-D Target Tracking Using Adaptive Range-Invariant Depth Models and Spatio-Temporal Consistency Constraints. In TC 48(8) 2018. [Paper] [Dataset]
  • CDTB: Alan Lukezic, Ugur Kart, Jani Kapyla, Ahmed Durmush, Joni-Kristian Kamarainen, Jiri Matas, Matej Kristan. CDTB: A Color and Depth Visual Object Tracking Dataset and Benchmark. In ICCV, 2019. [Paper] [Project] [Dataset]
  • DepthTrack: Song Yan, Jinyu Yang, Jani Käpylä, Feng Zheng, Aleš Leonardis, Joni-Kristian Kämäräinen. DepthTrack: Unveiling the Power of RGBD Tracking. In ArXiv, 2021. [Paper]
  • RGBD1K: Xue-Feng Zhu, Tianyang Xu, Zhangyong Tang, Zucheng Wu, Haodong Liu, Xiao Yang, Xiao-Jun Wu, Josef Kittler. RGBD1K: A Large-scale Dataset and Benchmark for RGB-D Object Tracking. In ArXiv, 2022. [Paper]
  • D2Cube: Jinyu Yang, Shang Gao, Zhe Li, Feng Zheng, Ales Leonardis. Resource-Efficient RGBD Aerial Tracking. In CVPR 2023. [Paper] [Project]

RGB-T Tracking

2023

  • CIC: Xingchen Zhang, Yiannis Demiris. Self-Supervised RGB-T Tracking with Cross-Input Consistency. In ArXiv 2023. [Paper]
  • MACFT: Yang Luo, Xiqing Guo, Mingtao Dong, Jin Yu. RGB-T Tracking Based on Mixed Attention. In ArXiv 2023. [Paper]
  • TBSI: Tianrui Hui, Zizheng Xun, Fengguang Peng, Junshi Huang, Xiaoming Wei, Xiaolin Wei, Jiao Dai, Jizhong Han, Si Liu. Bridging Search Region Interaction with Template for RGB-T Tracking. In CVPR 2023. [Paper]
  • CMD: Tianlu Zhang, Hongyuan Guo, Qiang Jiao, Qiang Zhang, Jungong Han. Efficient RGB-T Tracking via Cross-Modality Distillation. In CVPR 2023. [Paper]

2022

  • LRMWT: Mengzheng Feng, Jianbo Su. Learning Reliable Modal Weight with Transformer for Robust RGBT Tracking. In KBS, 2022. [Paper]
  • APFNet: Yun Xiao, Mengmeng Yang, Chenglong Li, Lei Liu, Jin Tang. Attribute-Based Progressive Fusion Network for RGBT Tracking. In AAAI 2022. [Paper] [Code]
  • HMFT: Pengyu Zhang, Jie Zhao, Dong Wang, Huchuan Lu, Xiang Ruan. Visible-Thermal UAV Tracking: A Large-Scale Benchmark and New Baseline. In CVPR 2022. [Paper] [Code]

2021

  • DFNet: Jingchao Peng, Haitao Zhao and Zhengwei Hu. Dynamic Fusion Network for RGBT Tracking. In ArXiv, 2021. [Paper]
  • ADRNet: Pengyu Zhang, Dong Wang, Huchuan Lu and Xiaoyun Yang. Learning Adaptive Attribute-Driven Representation for Real-Time RGB-T Tracking. In IJCV, 2021. [Paper] [Code]
  • SiamCDA: Tianlu Zhang, Xueru Liu, Qiang Zhang and Jungong Han. SiamCDA: Complementarity- and Distractor-Aware RGB-T Tracking Based on Siamese Network. In TCSVT, 2021. [Paper]
  • SiamIVFN: Jingchao Peng, Haitao Zhao, Zhengwei Hu, Yi Zhuang and Bofan Wang. Siamese Infrared and Visible light fusion network for RGB-T Tracking. In ArXiv, 2021. [Paper]
  • JMMAC: Pengyu Zhang, Jie Zhao, Chunjuan Bo, Dong Wang, Huchuan Lu, Xiaoyun Yang. Jointly Modeling Motion and Appearance Cues for Robust RGB-T Tracking. In TIP(30), 2021. [Paper] [Code]

2020

  • MFGNet: Xiao Wang, Xiujun Shu, Shiliang Zhang, Bo Jiang, Yaowei Wang, Yonghong Tian, Feng Wu. Dynamic Modality-Aware Filter Generation for RGB-T Tracking. In ArXiv 2020. [Project]
  • DMCNet: Andong Lu, Cun Qian, Chenglong Li, Jin Tang, and Liang Wang. Duality-Gated Mutual Condition Network for RGBT Tracking. In ArXiv, 2020. [Paper]
  • CAT: Chenglong Li, Lei Liu, Andong Lu, Qing Ji, Jin Tang. Challenge-aware RGBT tracking. In ECCV, 2020. [Paper]
  • CMPP: Chaoqun Wang, Chunyan Xu, Zhen Cui, Ling Zhou, Tong Zhang, Xiaoya Zhang, Jian Yang. Cross-modal pattern-propagation for RGB-T tracking. In CVPR, 2020. [Paper]
  • MaCNet: Hui Zhang, Lei Zhang, Li Zhuo, Jing Zhang. Object Tracking in RGB-T Videos Using Modal-Aware Attention Network and Competitive Learning. In Sensors 20(2), 2020. [Paper]

2019

  • DAPNet: Yabin Zhu, Chenglong Li, Bin Luo, Jin Tang, Xiao Wang. Dense Feature Aggregation and Pruning for RGBT Tracking. In ACM MM, 2019. [Paper]
  • HTF: Chengwei Luo, Bin Sun, Ke Yang, Taoran Lu, Wei-Chang Yeh. Thermal Infrared and Visible Sequences Fusion Tracking based on a Hybrid Tracking Framework with Adaptive Weighting Scheme. In IPT (99), 2019. [Paper]
  • LMCFT: Xiangyuan Lan, Mang Ye, Rui Shao, Bineng Zhong, Pong C. Yuen, Huiyu Zhou. Learning Modality-Consistency Feature Templates: A Robust RGB-Infrared Tracking System. In TIE 66(12), 2019. [Paper]
  • MANet: Chenglong Li, Andong Lu, Aihua Zheng, Zhengzheng Tu, Jin Tang. Multi-Adapter RGBT Tracking. In ICCV Workshop, 2019. [Paper]
  • TODA: Rui Yang, Yabin Zhu, Xiao Wang, Chenglong Li, Jin Tang. Learning Target-Oriented Dual Attention for Robust RGB-T Tracking. In ICIP, 2019. [Paper]
  • DAFNet: Yuan Gao, Chenglong Li, Yabin Zhu, Jin Tang, Tao He, Futian Wang. Deep Adaptive Fusion Network for High Performance RGBT Tracking. In ICCV Workshop 2019. [Paper]
  • DiMP-RGBT: Lichao Zhang, Martin Danelljan, Abel Gonzalez-Garcia, Joost van de Weijer, Fahad Shahbaz Khan. Multi-Modal Fusion for End-to-End RGB-T Tracking. In ICCV Workshop, 2019. [Paper]
  • ONMF: Xiangyuan Lan, Mang Ye, Rui Shao, Bineng Zhong, Deepak Kumar Jain, Huiyu Zhou. Online Non-Negative Multi-Modality Feature Template Learning for RGB-Assisted Infrared Tracking. In Access (7), 2019. [Paper]
  • CMCF: Sulan Zhai, Pengpeng Shao, Xinyan Liang, Xin Wang. Fast RGB-T Tracking via Cross-Modal Correlation Filters. In Neurocomputing (334), 2019. [Paper]

2018

  • RCDL: Xiangyuan Lan, Mang Ye, Shengping Zhang, Pong C. Yuen. Robust Collaborative Discriminative Learning for RGB-Infrared Tracking. In AAAI, 2018. [Paper]
  • MSR: Xiangyuan Lan, Mang Ye, Shengping Zhang, Huiyu Zhou, Pong C. Yuen. Modality-Correlation-Aware Sparse Representation for RGB-Infrared Object Tracking. In PRL (130), 2018.
  • CMR: Chenglong Li, Chengli Zhu, Yan Huang, Jin Tang, Liang Wang. Cross-Modal Ranking with Soft Consistency and Noisy Labels for Robust RGB-T Tracking. In ECCV, 2018. [Paper]
  • RMR: Chenglong Li, Chengli Zhu, Shaofei Zheng, Bin Luo, Jin Tang. Two-Stage Modality-Graphs Regularized Manifold Ranking for RGB-T Tracking. In SPIC (68), 2018. [Paper]
  • LGMG: Chenglong Li, Chengli Zhu, Jian Zhang, Bin Luo, Xiaohao Wu, Jin Tang. Learning Local-Global Multi-Graph Descriptors for RGB-T Object Tracking. In TCSVT 29(10), 2018. [Paper]
  • MDNet-RGBT: Xingming Zhang, Xuehan Zhang, Xuedan Du, Xiangming Zhou, Jun Yin. Learning Multi-domain Convolutional Network for RGB-T Visual Tracking. In CISP, 2018. [Paper]
  • FTSNet: Chenglong Li, Xiaohao Wu, Nan Zhao, Xiaochun Cao, Jin Tang. Fusing Two-Stream Convolutional Neural Networks for RGB-T Object Tracking. In Neurocomputing (281), 2018. [Paper]
  • CSCF: Yulong Wang, Chenglong Li, Jin Tang, and Dengdi Sun. Learning Collaborative Sparse Correlation Filter for Real-Time Multispectral Object Tracking. In BICS, 2018. [Paper]

2017

  • SGT: Chenglong Li, Nan Zhao, Yijuan Lu, Chengli Zhu, Jin Tang. Weighted Sparse Representation Regularized Graph Learning for RGB-T Object Tracking. In ACM MM, 2017. [Paper]
  • MLSR: Chenglong Li, Xiang Sun, Xiao Wang, Lei Zhang, Jin Tang. Grayscale-Thermal Object Tracking via Multitask Laplacian Sparse Representation. In TSMCS 47(4), 2017. [Paper]

2016

  • RT-LSR: Chenglong Li, Shiyi Hu, Sihan Gao, Jin Tang. Real-Time Grayscale-Thermal Tracking via Laplacian Sparse Representation. In MMM, 2016. [Paper]
  • CSR: Chenglong Li, Hui Cheng, Shiyi Hu, Xiaobai Liu, Jin Tang, Liang Lin. Learning Collaborative Sparse Representation for Grayscale-Thermal Tracking. In TIP 25(12), 2016. [Paper]

2012

  • JSR: Huaping Liu, Fuchun Sun. Fusion Tracking in Color and Infrared Images using Joint Sparse Representation. In IS 55(3), 2012. [Paper]

2011

  • L1-PF: Yi Wu, Erik Blasch, Genshe Chen, Li Bai, Haibin Ling. Multiple Source Data Fusion via Sparse Representation for Robust Visual Tracking. In ICIF, 2011. [Paper]

2008

  • PGM: Siyue Chen, Wenjie Zhu, Henry Leung. Thermo-Visual Video Fusion Using Probabilistic Graphical Model for Human Tracking. In ISCS, 2008. [Paper]
  • MST: Ciarán Ó Conaire, Noel E. O’Connor, Alan Smeaton. Thermo-Visual Feature Fusion for Object Tracking using Multiple Spatiogram Trackers. In MVA (19), 2008. [Paper]

2007

  • PLF: N. Cvejic, S. G. Nikolov, H. D. Knowles, A. Łoza, A. Achim, D. R. Bull, C. N. Canagarajah. The Effect of Pixel-Level Fusion on Object Tracking in Multi-Sensor Surveillance Video. In CVPR, 2007. [Paper]

2006

  • CFM: C. Ó Conaire, N. E. O'Connor, E. Cooke, A. F. Smeaton. Comparison of Fusion Methods for Thermo-Visual Surveillance Tracking. In ICIF 2006. [Paper]

Datasets

  • OTCBVS: James W. Davis, Vinay Sharma. Background-Subtraction using Contour-based Fusion of Thermal and Visible Imagery. In CVIU (106), 2007. [Paper] [Project] [Dataset]
  • LITIV: Atousa Torabi, Guillaume Massé, Guillaume-Alexandre Bilodeau. An Iterative Integrated Framework for Thermal–Visible Image Registration, Sensor Fusion, and People Tracking for Video Surveillance applications. In CVIU (116), 2012. [Paper] [Project] [Code]
  • GTOT: Chenglong Li, Hui Cheng, Shiyi Hu, Xiaobai Liu, Jin Tang, Liang Lin. Learning Collaborative Sparse Representation for Grayscale-Thermal Tracking. In TIP 25(12), 2016. [Paper] [Dataset]
  • RGBT210: Chenglong Li, Nan Zhao, Yijuan Lu, Chengli Zhu, Jin Tang. Weighted Sparse Representation Regularized Graph Learning for RGB-T Object Tracking. In ACM MM, 2017. [Paper]
  • RGBT-234: Chenglong Li, Xinyan Liang, Yijuan Lu, Nan Zhao, Jin Tang. RGB-T Object Tracking: Benchmark and Baseline. In PR (96), 2019. [Paper] [Project] [Dataset]
  • VOT-RGBT: Matej Kristan, Jiri Matas, Ales Leonardis et al. The Seventh Visual Object Tracking VOT2019 Challenge Results. In ICCV Workshop, 2019. [Paper] [Project] [Dataset]
  • LSS Dataset: Tianlu Zhang, Xueru Liu, Qiang Zhang, Jungong Han. SiamCDA: Complementarity- and Distractor-Aware RGB-T Tracking Based on Siamese Network. In TCSVT, 2021. [Project]
  • LasHeR: Chenglong Li, Wanlin Xue, Yaqing Jia, Zhichen Qu, Bin Luo, and Jin Tang. LasHeR: A Large-scale High-diversity Benchmark for RGBT Tracking. In ArXiv, 2021. [Paper] [Project]
  • VTUAV: Pengyu Zhang, Jie Zhao, Dong Wang, Huchuan Lu, Xiang Ruan. Visible-Thermal UAV Tracking: A Large-Scale Benchmark and New Baseline. In CVPR 2022. [Paper] [Project]
