Source code for the paper "Material-Guided Multiview Fusion Network for Hyperspectral Object Tracking".
The environment configuration follows https://github.com/fzh0917/STMTrack.
Prepare Anaconda, CUDA, and the corresponding toolkits. CUDA 10.0 or later is required.
Create a new conda environment and activate it.

```shell
conda create -n MMFNet python=3.7 -y
conda activate MMFNet
```
Install PyTorch and torchvision.

```shell
conda install pytorch==1.4.0 torchvision==0.5.0 cudatoolkit=10.0 -c pytorch
# PyTorch v1.5.0, v1.6.0, or higher should also work.
```
Install the other required packages.

```shell
pip install -r requirements.txt
```
- The hyperspectral video datasets are available at https://www.hsitracking.com/.
- The material view is generated with the code from the paper "Material Based Object Tracking in Hyperspectral Videos".
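As background, the hsitracking.com videos are captured with snapshot-mosaic hyperspectral cameras, where each raw frame interleaves its spectral bands in a repeating spatial pattern. The sketch below unpacks such a frame into per-band images, assuming a 4x4 mosaic (16 bands); the function name and band layout are illustrative assumptions, not the dataset's documented format.

```python
# Sketch: unpack a mosaic-pattern hyperspectral frame into separate band images.
# The 4x4 layout (16 bands) is an assumption for illustration; consult the
# dataset description at https://www.hsitracking.com/ for the actual pattern.
def mosaic_to_bands(frame, pattern=4):
    """frame: 2-D list of pixel values; returns pattern*pattern band images."""
    h, w = len(frame), len(frame[0])
    bands = []
    for dy in range(pattern):       # row offset within each mosaic cell
        for dx in range(pattern):   # column offset within each mosaic cell
            band = [[frame[y][x] for x in range(dx, w, pattern)]
                    for y in range(dy, h, pattern)]
            bands.append(band)
    return bands
```

Note that each band image has 1/pattern of the raw frame's resolution in both dimensions, which is the spatial-versus-spectral trade-off the paper discusses.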
Train:
(a) Download the pretrained model from:
- https://pan.baidu.com/s/1vBmGFoQ4MRTUeLE3o7pteg
- Access code: 1234
(b) Change the path of the training data in videoanalyst/evaluation/.
(c) Run: train.sh
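Step (b) is normally done by editing the config by hand. For convenience, a small hypothetical helper that rewrites a `dataset_root`-style entry in a plain-text config is sketched below; the key name and config file location are assumptions, so match them against the actual files under videoanalyst/evaluation/.

```python
# Sketch: point the tracker's config at the local dataset root before training.
# The key name "dataset_root" is hypothetical; use the key actually present in
# the config files under videoanalyst/evaluation/.
from pathlib import Path

def set_dataset_root(config_path, new_root, key="dataset_root"):
    """Replace the value of every `key: <old>` line with `key: <new_root>`."""
    lines = Path(config_path).read_text().splitlines()
    out = []
    for line in lines:
        stripped = line.lstrip()
        if stripped.startswith(key + ":"):
            indent = line[: len(line) - len(stripped)]  # keep YAML indentation
            out.append(f"{indent}{key}: {new_root}")
        else:
            out.append(line)
    Path(config_path).write_text("\n".join(out) + "\n")
```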
Test:
(a) Download the testing model from:
- https://pan.baidu.com/s/15YdmJRvagPzKcUNWBloiHA
- Access code: 1234
(b) Put the testing model in snapshots/stmtrack-googlenet-got-train.
(c) Run: test.sh
Citation:

```bibtex
@ARTICLE{10438474,
  author={Li, Zhuanfeng and Xiong, Fengchao and Zhou, Jun and Lu, Jianfeng and Zhao, Zhuang and Qian, Yuntao},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  title={Material-Guided Multiview Fusion Network for Hyperspectral Object Tracking},
  year={2024},
  volume={62},
  number={},
  pages={1-15},
  keywords={Feature extraction;Hyperspectral imaging;Target tracking;Videos;Object tracking;Visualization;Spatial resolution;Hyperspectral object tracking;hyperspectral unmixing;multihead attention;multiview fusion},
  doi={10.1109/TGRS.2024.3366536}}
```
- If you have any questions, feel free to contact me at lizhuanfeng@njust.edu.cn.