Real-time object detection and tracking
Usage:
# set up env
poetry env use python3.7
poetry install
# run tracking with preview; if model doesn't exist it'll be downloaded
poetry run yolo_track --source test/ducks1.mp4 --yolo_model models/yolov5s6.pt --show-vid
# run tracking and stream output to file
poetry run yolo_track --source test/ducks1.mp4 --yolo_model models/yolov5s6.pt --save-txt --out-txt /tmp/track1.txt
# run tracking and stream output to rabbitmq
poetry run yolo_track --source ./test/ducks1.mp4 --yolo_model models/yolov5m6.pt --log-rmq --out-rmq example.com:5671,yolo1,user:pass
# youtube livestream
poetry run yolo_track --source "$(youtube-dl -f 'bestvideo[height<=480]+bestaudio/best[height<=480]' -g 'https://www.youtube.com/watch?v=JJqXeRFsLjE')" --yolo_model models/yolov5m6.pt --save-txt --out-txt /tmp/obj1.txt
# youtube livestream with yt-dlp
poetry run yolo_track --source "$(yt-dlp -f 'bestvideo[height<=480]+bestaudio/best[height<=480]' -g 'https://www.youtube.com/watch?v=s4SiFUNYdTs' | head -n 1)" --yolo_model models/yolov5n6.pt --save-txt --out-txt /tmp/obj1.txt
Within Docker:
# basic test
mkdir -p /tmp/yolo && echo -n > /tmp/yolo/obj1.txt && docker run -v /tmp/yolo:/out -it --rm yolo_track:dev yolo_track.track --source 'https://some-website.domain/some_stream.ts' --yolo_model models/yolov5n6.pt --save-txt --out-txt /out/obj1.txt --frames 10
# live tracking and data streaming
podman run -it --rm -v $(pwd)/certs:/certs xdrie/yolo_track:v0.6 yolo_track.track --source <stream_url> --yolo_model models/yolov5n6.pt --deep_sort_model osnet_ain_x0_5 --log-rmq --out-rmq <rmq_connstr>
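For the --log-rmq/--out-rmq runs above, a consumer on the RabbitMQ side can read the tracker's messages. The sketch below uses pika and reads the connection-string fields as host:port, queue, user:pass, which is my reading of the example.com:5671,yolo1,user:pass example (port 5671 implies TLS). The message payload format is not specified here, so the consumer just prints the raw body.

import ssl
import pika

ssl_ctx = ssl.create_default_context()  # adjust if your broker uses the certs mounted at /certs
params = pika.ConnectionParameters(
    host="example.com",
    port=5671,
    credentials=pika.PlainCredentials("user", "pass"),
    ssl_options=pika.SSLOptions(ssl_ctx),
)
connection = pika.BlockingConnection(params)
channel = connection.channel()

def on_message(ch, method, properties, body):
    print(body.decode("utf-8", errors="replace"))  # raw tracker message

channel.basic_consume(queue="yolo1", on_message_callback=on_message, auto_ack=True)
channel.start_consuming()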
To get more models, just pass a different --yolo_model; if the weights are not present, they are downloaded automatically (see the YOLOv5 family list below).
This repository contains a highly configurable two-stage tracker that adjusts to different deployment scenarios. The detections generated by YOLOv5, a family of object detection architectures and models pretrained on the COCO dataset, are passed to a DeepSORT algorithm, which tracks the objects. It can track any object that your YOLOv5 model was trained to detect.
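The hand-off between the two stages can be pictured with a minimal sketch. The detection half uses the public torch.hub API for YOLOv5; the tracker call is only a placeholder for this repo's DeepSort wrapper, whose exact interface may differ.

import cv2
import torch

# Stage 1: YOLOv5 detector via the public torch.hub API (weights download on first use).
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

cap = cv2.VideoCapture("test/ducks1.mp4")  # sample clip from the usage examples above
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    detections = results.xyxy[0].cpu().numpy()  # rows of [x1, y1, x2, y2, confidence, class]
    # Stage 2: hand the boxes to the association/tracking stage. The call below is a
    # placeholder, not this repo's exact DeepSort API.
    # tracks = tracker.update(detections, frame)
cap.release()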
- Yolov5 training on Custom Data (link to external repository)
- DeepSort deep descriptor training (link to external repository)
- Yolov5 deep_sort pytorch evaluation
- Clone the repository recursively:
git clone --recurse-submodules https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch.git
If you already cloned and forgot to use --recurse-submodules, you can run git submodule update --init
- Make sure that you fulfill all the requirements: Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install, run:
pip install -r requirements.txt
Tracking can be run on most video sources:
$ python track.py --source 0 # webcam
img.jpg # image
vid.mp4 # video
path/ # directory
path/*.jpg # glob
'https://youtu.be/Zgi9g1ksQHc' # YouTube
'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
There is a clear trade-off between model inference speed and accuracy. To fulfill your inference speed/accuracy needs, you can select a YOLOv5 family model for automatic download:
$ python track.py --source 0 --yolo_model yolov5n.pt --img 640
yolov5s.pt
yolov5m.pt
yolov5l.pt
yolov5x.pt --img 1280
...
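As a rough way to see that trade-off on your own hardware, the variants can be timed through the same torch.hub API. This is a sketch only: the dummy frame stands in for real video, and the absolute numbers depend on your machine.

import time
import numpy as np
import torch

frame = np.zeros((640, 640, 3), dtype=np.uint8)  # dummy RGB frame

for variant in ("yolov5n", "yolov5s", "yolov5m"):
    model = torch.hub.load("ultralytics/yolov5", variant)  # downloads weights on first use
    model(frame)  # warm-up pass
    t0 = time.time()
    for _ in range(20):
        model(frame)
    print(f"{variant}: {(time.time() - t0) / 20 * 1000:.1f} ms per frame")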
Choose a ReID model based on your needs from the ReID model zoo:
$ python track.py --source 0 --deep_sort_model osnet_x1_0
nasnsetmobile
resnext101_32x8d
...
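If you are unsure which names are valid, the torchreid package behind the deep-person-reid model zoo can list them, assuming that is the zoo referenced here and that torchreid is installed:

import torchreid

torchreid.models.show_avai_models()  # prints accepted names, e.g. osnet_x1_0, resnext101_32x8d, ...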
By default the tracker tracks all MS COCO classes.
If you only want to track persons, I recommend you use these weights for increased performance:
python3 track.py --source 0 --yolo_model yolov5/weights/crowdhuman_yolov5m.pt --classes 0 # tracks persons, only
If you want to track a subset of the MS COCO classes, add their corresponding indices after the --classes flag:
python3 track.py --source 0 --yolo_model yolov5s.pt --classes 15 16 # tracks cats and dogs, only
See the MS COCO class list for all the objects that a YOLOv5 model trained on MS COCO can detect. Notice that the indexing for the classes in this repo starts at zero.
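To look up a class index without leaving Python, the names mapping of any COCO-pretrained YOLOv5 model can be printed via the public torch.hub API (weights download on first use):

import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # COCO-pretrained weights
names = model.names  # index -> label; a dict in recent YOLOv5 releases, a list in older ones
names = names if isinstance(names, dict) else dict(enumerate(names))
for idx, label in sorted(names.items()):
    print(idx, label)  # e.g. 0 person, ..., 15 cat, 16 dog, ...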
Tracking results can be saved to your experiment folder track/expN with:
python3 track.py --source ... --save-txt
If you find this project useful in your research, please consider citing:
@misc{yolov5deepsort2020,
title={Real-time multi-object tracker using YOLOv5 and deep sort},
author={Mikel Broström},
howpublished = {\url{https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch}},
year={2020}
}