
Object Tracking with TensorRT

Introduction

This is a C++ implementation of object tracking. The object detector is the yolov5s model, and tracking follows the DeepSORT approach; the SORT part is based on sort-cpp. Object appearance features are extracted with a model trained with fast-reid, using MobileNetV2 for person ReID. These lightweight models are chosen to keep video processing real time. Model inference runs on TensorRT engines.

How to run?

0. Build environments

The TensorRT environment is built from the Dockerfile; run the following command.

docker build -t tensorrt_tracker:0.1.0_rc .

Follow the yolov5 and fast-reid requirements files to install their dependent packages.
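Once the image is built, start a container for the following build and inference steps. This is a minimal sketch, assuming the NVIDIA Container Toolkit is installed and the repository root is the current directory; the mount path is only an example.

docker run --gpus all -it --rm -v $(pwd):/workspace -w /workspace tensorrt_tracker:0.1.0_rc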

1. Transform PyTorch weights to ONNX

  • Transform yolov5 weights

Use this yolov5 repo to convert yolov5 *.pt weights to ONNX models. Run the following commands:

git clone https://github.com/linghu8812/yolov5.git
python3 export.py --weights weights/yolov5s.pt --batch-size 1 --imgsz 640 --include onnx --simplify

A pretrained yolov5 ONNX detection model can be downloaded from here, link: https://pan.baidu.com/s/1RUz7Xk78lvKCeZNk_BBvoQ, code: jung. Download the model and put it in the weights folder.
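Optionally, sanity-check the exported model before building a TensorRT engine from it. A minimal check, assuming the onnx Python package is installed and the model was saved as weights/yolov5s.onnx:

python3 -c "import onnx; onnx.checker.check_model(onnx.load('weights/yolov5s.onnx'))"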

  • Transform fastreid weights

Use the official fast-reid repo to convert PyTorch weights to an ONNX model. Run the following commands:

git clone https://github.com/JDAI-CV/fast-reid.git
cd fast-reid
python3 tools/deploy/onnx_export.py --config-file configs/Market1501/mgn_R50-ibn.yml --name mgn_R50-ibn --output outputs/onnx_model --batch-size 32 --opts MODEL.WEIGHTS market_mgn_R50-ibn.pth

A pretrained fast-reid ONNX ReID model can be downloaded from here, link: https://pan.baidu.com/s/19TuHxxuVYLBzie5_Vu0cCQ, code: 1e35. Download the model and put it in the weights folder.
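To confirm the exported ReID model and its expected input shape (the export above uses --batch-size 32), it can be inspected with onnxruntime. A sketch, assuming onnxruntime is installed and the model was copied to weights/mgn_R50-ibn.onnx:

python3 -c "import onnxruntime as ort; print([(i.name, i.shape) for i in ort.InferenceSession('weights/mgn_R50-ibn.onnx').get_inputs()])"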

2. Get video for inference ready

Put the video file for inference into the samples folder. A demo video for inference can be downloaded from here: https://pan.baidu.com/s/1Yyh1lwmzNl_gjvNz9EVI5w, code: fpi0.
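To verify that the video decodes correctly, it can be inspected with ffprobe (part of FFmpeg); the path assumes the demo file was saved as samples/test.mpg:

ffprobe -hide_banner samples/test.mpg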

3. Build project

Run the following commands:

git clone git@github.com:linghu8812/tensorrt_tracker.git
cd tensorrt_tracker
mkdir build && cd build
cmake ..
make -j

4. Run project

Run the following command from the build directory (the config and sample paths are relative to it):

./object_tracker ../configs/config.yaml ../samples/test.mpg

Results demo: (demo image: tensorrt_tracker)
