
[INF FUS 2024] SENSE: Hyperspectral Video Object Tracker via Fusing Material and Motion Cues


SENSE

📖Paper

PyTorch code for "SENSE: Hyperspectral Video Object Tracker via Fusing Material and Motion Cues" (Information Fusion, 2024).


🏃Keep updating🏃: more detailed tracking results for SENSE have been released.


Abstract

Hyperspectral video offers a wealth of material and motion cues about objects. This advantage proves invaluable in addressing the inherent limitations of generic RGB video tracking in complex scenarios such as illumination variation, background clutter, and fast motion. However, existing hyperspectral tracking methods often prioritize the material cue of objects while overlooking the motion cue contained in sequential frames, resulting in unsatisfactory tracking performance, especially under partial or full occlusion. To this end, this article proposes SENSE, a novel hyperspectral video object tracker that fuses material and motion cues. First, to fully exploit the material cue, we propose a spectral-spatial self-expression (SSSE) module that adaptively converts the hyperspectral image into complementary false modalities while effectively bridging the band gap. Second, we propose a cross-false modality fusion (CFMF) module that aggregates and enhances the differential-common material features derived from the false modalities to strengthen material awareness for robust object representations. Furthermore, a motion awareness (MA) module is designed, consisting of an awareness selector that determines the reliability of each cue and a motion prediction scheme that handles abnormal states. Extensive experiments demonstrate the effectiveness of the proposed method over state-of-the-art trackers. The code is available at https://github.com/YZCU/SENSE.
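
The exact formulations of these modules live in the paper and the released code. Purely as a hedged sketch of two of the ideas above, the snippet below regroups hyperspectral bands into three-channel false modalities with a fixed partition (where SSSE learns an adaptive self-expression) and extrapolates the target center with a constant-velocity rule (a simple stand-in for the motion prediction scheme). All names here are illustrative and are not part of the SENSE codebase.

```python
import torch


def regroup_into_false_modalities(hsi, num_modalities=3):
    """Regroup the bands of a hyperspectral frame (B, C, H, W) into
    `num_modalities` three-channel false-color images.

    Hypothetical illustration only: SENSE's SSSE module learns an adaptive
    spectral-spatial self-expression, whereas this helper uses a fixed,
    uniform band partition.
    """
    _, c, _, _ = hsi.shape
    assert c >= 3 * num_modalities, "need at least 3 bands per false modality"
    # Partition the spectrum into 3 * num_modalities contiguous groups and
    # average each group down to a single channel.
    groups = torch.tensor_split(hsi, 3 * num_modalities, dim=1)
    channels = [g.mean(dim=1, keepdim=True) for g in groups]
    # Stack every 3 consecutive channels into one false-color image.
    return [torch.cat(channels[3 * i: 3 * i + 3], dim=1)
            for i in range(num_modalities)]


class ConstantVelocityPredictor:
    """Hypothetical stand-in for the motion prediction scheme: when the
    material cue is judged unreliable (e.g. under occlusion), extrapolate
    the next target center from recent motion instead of trusting the
    appearance-based response."""

    def __init__(self):
        self.prev = None            # last observed center (cx, cy)
        self.velocity = (0.0, 0.0)  # per-frame displacement estimate

    def update(self, cx, cy):
        """Record a reliable observation and refresh the velocity estimate."""
        if self.prev is not None:
            self.velocity = (cx - self.prev[0], cy - self.prev[1])
        self.prev = (cx, cy)

    def predict(self):
        """Extrapolate the center one frame ahead under constant velocity."""
        assert self.prev is not None, "needs at least one observation"
        return (self.prev[0] + self.velocity[0],
                self.prev[1] + self.velocity[1])


if __name__ == "__main__":
    frame = torch.rand(1, 16, 128, 128)      # e.g. a 16-band frame
    falses = regroup_into_false_modalities(frame)
    print([tuple(f.shape) for f in falses])  # three (1, 3, 128, 128) tensors

    predictor = ConstantVelocityPredictor()
    predictor.update(100.0, 80.0)
    predictor.update(104.0, 82.0)
    print(predictor.predict())               # (108.0, 84.0)
```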

Install

git clone https://github.com/YZCU/SENSE.git

Environment

  • CUDA 11.8
  • Python 3.9.18
  • PyTorch 2.0.0
  • Torchvision 0.15.0
  • numpy 1.25.0
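
A minimal environment setup, assuming conda and pip are available (the PyTorch index URL below targets the CUDA 11.8 wheels; adjust it to your CUDA setup):

conda create -n sense python=3.9
conda activate sense
pip install torch==2.0.0 torchvision==0.15.0 --index-url https://download.pytorch.org/whl/cu118
pip install numpy==1.25.0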

Prepare training and test datasets

  • RGB training datasets:
  • Hyperspectral training and test datasets:

🖼Results

  • Comparison with SOTA RGB trackers

  • Comparison with SOTA hyperspectral trackers. (a) Precision plot. (b) Success plot.

  • Visual comparison

Citation

If you find our work helpful in your research, please consider citing it. We appreciate your support!

@article{CHEN2024102395,
  title = {SENSE: Hyperspectral Video Object Tracker via Fusing Material and Motion Cues},
  journal = {Information Fusion},
  pages = {102395},
  year = {2024},
  issn = {1566-2535},
  doi = {10.1016/j.inffus.2024.102395},
  url = {https://www.sciencedirect.com/science/article/pii/S1566253524001738},
  author = {Yuzeng Chen and Qiangqiang Yuan and Yuqi Tang and Yi Xiao and Jiang He and Zhenqi Liu},
  keywords = {Hyperspectral, Object tracking, Self-expression, False modality fusion, Motion awareness}
}

Acknowledgement

We would like to express our sincere gratitude to the authors of the excellent projects SiamCAR, SiamBAN, JMMAC, DF, MMF-Net, Siam-HYPER, SEE-Net, SiamBAG, Trans-DAT, TSCFW, AD-SiamRPN, and OTB. These great works inspired the present one.
