Due to conflicts with the main inference script, the ONNX Runtime and TensorRT-based evaluation lives on a separate branch. Run the following commands in a terminal to switch to it:

```bash
git clone https://github.com/goutamyg/MVT.git
cd MVT
git checkout --track origin/multi_framework_inference
```
Introduction: ONNX Runtime is an open-source library from Microsoft for accelerating neural network inference. With ONNX Runtime as the backend, the MVT tracker runs at ~70 fps ⚡ on a 12th Gen Intel® Core i9 CPU.
For ONNX-Runtime-based inference on CPU, install

```bash
pip install onnx onnxruntime
```
Download the ONNX model from here, or run

```bash
python tracking/pytorch2onnx.py
```

to generate the ONNX file from the pretrained PyTorch model.
Then run the evaluation with the ONNX backend:

```bash
python tracking/test.py --tracker_name mobilevit_track --tracker_param mobilevit_256_128x1_got10k_ep100_cosine_annealing --dataset got10k_test --backend onnx
```
Introduction: TensorRT is a high-performance deep learning inference SDK from NVIDIA®. With TensorRT as the backend, our MVT tracker runs at ~300 fps ⚡⚡ on an NVIDIA RTX 3090 GPU.
For TensorRT-based inference on GPU, install

```bash
pip install tensorrt
```

Run

```bash
python tracking/pytorch2onnx.py
```

to generate the ONNX file, and then

```bash
python tracking/onnx2trt.py
```

to generate the TensorRT engine.
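As an alternative to the provided `onnx2trt.py` script, NVIDIA ships the `trtexec` CLI with TensorRT, which can build a serialized engine directly from an ONNX file; the paths below are illustrative:

```bash
# Build a serialized TensorRT engine from the exported ONNX graph.
# --fp16 enables half-precision kernels where supported (optional).
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

Note that an engine built this way is tied to the GPU and TensorRT version it was built on.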
Then run the evaluation with the TensorRT backend:

```bash
python tracking/test.py --tracker_name mobilevit_track --tracker_param mobilevit_256_128x1_got10k_ep100_cosine_annealing --dataset got10k_test --backend tensorrt
```
| Tracker | Source | GOT10K-test (AUC) | Speed (CPU) | Speed (GPU) |
|---|---|---|---|---|
| MVT | BMVC'23 | 0.633 | ~70 fps | ~300 fps |
| Ocean | ECCV'20 | 0.611 | ~10 fps | ~130 fps |
| DiMP50 | ICCV'19 | 0.611 | ~15 fps | ~60 fps |
Our MVT tracker outperforms the DCF-based DiMP50 and the Siamese-based Ocean, while running at least 4.5× faster on CPU and 2.3× faster on GPU 🔥.
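The speed-up claim follows directly from the table; a quick check of MVT's frame rate against the fastest competitor on each device:

```python
# Frame rates from the comparison table above.
cpu_fps = {"MVT": 70, "Ocean": 10, "DiMP50": 15}
gpu_fps = {"MVT": 300, "Ocean": 130, "DiMP50": 60}

# "At least X times faster" = ratio against the fastest competitor.
cpu_speedup = cpu_fps["MVT"] / max(v for k, v in cpu_fps.items() if k != "MVT")
gpu_speedup = gpu_fps["MVT"] / max(v for k, v in gpu_fps.items() if k != "MVT")

print(round(cpu_speedup, 1), round(gpu_speedup, 1))  # 4.7 2.3
```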