

Cross-modal Orthogonal High-rank Augmentation for RGB-Event Transformer-trackers

[ICCV 2023]

Paper · Report Bug · Request Feature


Demos
Table of Contents
  1. Getting Started
  2. License
  3. Contact
  4. Acknowledgments

Getting Started

Prerequisites

  1. Clone the project

    git clone https://github.com/ZHU-Zhiyu/High-Rank_RGB-Event_Tracker.git

  2. FE108

   * Download data from FE108

   * Convert and clip the data into h5py format:

     python ./Utils/Evt_convert.py

     The directory should have the format below:

<details>
<summary>Format of FE108 (click to expand)</summary>

```Shell
├── FE108 dataset (108 sequences)
    ├── airplane 
        ├── inter3_stack
            ├── 0001_1.jpg
            ├── 0001_2.jpg
            ├── 0001_3.jpg
            ├── 0002_1.jpg
            ├── ...
        ├── img
            ├── 0001.jpg
            ├── 0002.jpg
            ├── ...
        ├── events.aedat4
        ├── groundtruth_rect.txt
    ├── airplane_motion
        ├── ... 
    ├── ... 
    ├── Event file(108 sequences)
        ├── airplane.h5
        ├── airplane_motion.h5
        ├── ... 

```
</details>
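After conversion, it can help to sanity-check one of the generated `.h5` files. Below is a minimal sketch using `h5py`, assuming events are stored as flat per-field datasets; the key names `x`, `y`, `t`, `p` are illustrative assumptions only, the real ones are whatever `Evt_convert.py` writes.

```python
import h5py
import numpy as np

# Build a tiny synthetic event file so the snippet runs stand-alone; in
# practice, point `path` at e.g. "Event file(108 sequences)/airplane.h5".
# NOTE: the dataset keys below ("x", "y", "t", "p") are assumptions.
path = "airplane_demo.h5"
with h5py.File(path, "w") as f:
    f.create_dataset("x", data=np.random.randint(0, 346, 100))           # pixel column
    f.create_dataset("y", data=np.random.randint(0, 260, 100))           # pixel row
    f.create_dataset("t", data=np.sort(np.random.randint(0, 10**6, 100)))  # timestamps
    f.create_dataset("p", data=np.random.randint(0, 2, 100))             # polarity

# List every dataset with its shape and dtype.
with h5py.File(path, "r") as f:
    for name, dset in f.items():
        print(name, dset.shape, dset.dtype)
```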
  3. COESOT

   * Download data from COESOT

   * Convert and clip the data into mat files:

     python ./COESOT/data.py

     The directory should have the format below:

<details>
<summary>Format of COESOT (click to expand)</summary>

    ├── COESOT dataset
        ├── Training Subset (827 sequences)
            ├── dvSave-2021_09_01_06_59_10
                ├── dvSave-2021_09_01_06_59_10.aedat4
                ├── groundtruth.txt
                ├── absent.txt
                ├── start_end_index.txt
            ├── ...
        ├── trainning voxel (827 sequences)
            ├── dvSave-2022_03_21_09_05_49
                ├── dvSave-2022_03_21_09_05_49_voxel
                    ├── frame0000.mat
                    ├── frame0001.mat
                    ├── ...
            ├── ...
        ├── Testing Subset (528 sequences)
            ├── dvSave-2021_07_30_11_04_12
                ├── dvSave-2021_07_30_11_04_12_aps
                ├── dvSave-2021_07_30_11_04_12_dvs
                ├── dvSave-2021_07_30_11_04_12.aedat4
                ├── groundtruth.txt
                ├── absent.txt
                ├── start_end_index.txt
            ├── ...
        ├── testing voxel (528 sequences)
            ├── dvSave-2022_03_21_11_12_27
                ├── dvSave-2022_03_21_11_12_27_voxel
                    ├── frame0000.mat
                    ├── frame0001.mat
                    ├── ...
            ├── ...
</details>
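Each voxel frame is a MATLAB `.mat` file, which `scipy` can load for a quick sanity check. A minimal sketch follows; the variable name `voxel` and the array shape are assumptions for illustration, so inspect `./COESOT/data.py` for the actual layout.

```python
import numpy as np
from scipy.io import loadmat, savemat

# Write a tiny synthetic frame so the snippet runs stand-alone; in practice,
# point `path` at e.g. ".../dvSave-2022_03_21_09_05_49_voxel/frame0000.mat".
# NOTE: the key "voxel" and shape (bins, height, width) are assumptions.
path = "frame0000.mat"
savemat(path, {"voxel": np.zeros((5, 260, 346), dtype=np.float32)})

mat = loadmat(path)
data_keys = [k for k in mat if not k.startswith("__")]  # skip MATLAB header fields
print(data_keys, mat["voxel"].shape, mat["voxel"].dtype)
```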

Installation

  1. One-stream tracker: CEUTrack

    conda create -n CEUTrack python==3.8
    conda activate CEUTrack
    cd ./CEUTrack
    sh install.sh
  2. Two-stream tracker: MonTrack

     conda create -n montrack python==3.8
     conda activate montrack
     cd ./MonTrack
     conda install -c pytorch pytorch=1.5 torchvision=0.6.1 cudatoolkit=10.2
     conda install matplotlib pandas tqdm
     pip install opencv-python tb-nightly visdom scikit-image tikzplotlib gdown
     conda install cython scipy
     sudo apt-get install libturbojpeg
     pip install pycocotools jpeg4py
     pip install wget yacs
     pip install shapely==1.6.4.post2
     python -c "from pytracking.evaluation.environment import create_default_local_file; create_default_local_file()"
     python -c "from ltr.admin.environment import create_default_local_file; create_default_local_file()"

    Then install KNN_CUDA.

Training

  1. One-stream tracker: CEUTrack

    cd CEUTrack
    sh train.sh
  2. Two-stream tracker: MonTrack. Download the SwinV2 Tiny/Base pretrained weights and put them into

    ./ltr/checkpoint

    Then run:

    cd ./MonTrack/ltr
    sh train.sh

Evaluation



Download the pretrained weights: Google Drive (Baidu: coming soon).

  1. One-stream tracker: CEUTrack

    sh eval.sh

  2. Two-stream tracker: MonTrack

    sh eval.sh
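For reference, the standard success measure on benchmarks like FE108 and COESOT is the fraction of frames whose predicted box overlaps the ground truth above an IoU threshold (with the success plot's AUC taken over thresholds). The sketch below is a generic illustration of that metric, not the repository's exact evaluation code:

```python
# Generic single-object-tracking success metric: fraction of frames whose
# predicted box overlaps the ground truth with IoU above a threshold.
# Boxes use the (x, y, w, h) convention common to tracking ground-truth files.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(preds, gts, threshold=0.5):
    """Fraction of frames with IoU(pred, gt) above the threshold."""
    hits = sum(iou(p, g) > threshold for p, g in zip(preds, gts))
    return hits / len(gts)

# One perfect frame and one complete miss -> success rate 0.5.
print(success_rate([(0, 0, 10, 10), (5, 5, 10, 10)],
                   [(0, 0, 10, 10), (20, 20, 10, 10)]))  # 0.5
```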

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Contact

Email - Zhu Zhiyu

Homepage: Page / Scholar

(back to top)

Acknowledgments

Thanks to FE108, COESOT datasets, TransT and OsTrack.

If you find this project interesting, please cite:

@inproceedings{zhu2023cross,
  title={Cross-modal Orthogonal High-rank Augmentation for RGB-Event Transformer-trackers},
  author={Zhu, Zhiyu and Hou, Junhui and Wu, Dapeng Oliver},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}
@article{zhu2022learning,
  title={Learning Graph-embedded Key-event Back-tracing for Object Tracking in Event Clouds},
  author={Zhu, Zhiyu and Hou, Junhui and Lyu, Xianqiang},
  journal={Advances in Neural Information Processing Systems},
  volume={35},
  pages={7462--7476},
  year={2022}
}

Template from othneildrew.

(back to top)

