Original image: https://www.flickr.com/photos/nicolelee/19041780
- The input images are resized directly to the model's input size. No padding is added, so accuracy may drop when an input image's aspect ratio differs from the model's input size. Try to export the model with an input size whose aspect ratio is close to that of the images you will use (see the preprocessing sketch after this list).
- Check the requirements.txt file.
- For ONNX, if you have an NVIDIA GPU, install onnxruntime-gpu; otherwise, use the onnxruntime library.
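As a minimal sketch of what this direct-resize preprocessing looks like (the helper name and normalization details are illustrative, not the repository's exact code):

import cv2
import numpy as np

# Illustrative only: resize straight to the model's input size (no letterbox
# padding), so a mismatched aspect ratio distorts the image.
def preprocess(image, input_width=640, input_height=480):
    resized = cv2.resize(image, (input_width, input_height))
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)
    tensor = rgb.astype(np.float32) / 255.0        # scale pixels to [0, 1]
    return tensor.transpose(2, 0, 1)[np.newaxis]   # HWC -> NCHW batch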
git clone https://github.com/AidenDam/yolov8_inference.git
cd yolov8_inference
pip install -r requirements.txt
For NVIDIA GPU computers:
pip install onnxruntime-gpu
Otherwise:
pip install onnxruntime
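You can confirm which execution providers your install exposes; with onnxruntime-gpu, CUDAExecutionProvider should be listed:

import onnxruntime as ort

# Prints e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'] on a GPU install
print(ort.get_available_providers())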
Use the Google Colab notebook to convert the YOLOv8 model, or follow the official export documentation.
You can also convert the model with the following code after installing ultralytics (pip install ultralytics):
from ultralytics import YOLO

# Load the pretrained YOLOv8n weights and export them to ONNX
# with a fixed 480x640 (height x width) input size
model = YOLO("yolov8n.pt")
model.export(format="onnx", imgsz=[480, 640])
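To sanity-check the exported model, you can load it with onnxruntime and run a dummy tensor through it (the input name "images" and the output shape follow the standard ultralytics ONNX export; verify them on your own model):

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov8n.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name             # typically "images"
dummy = np.zeros((1, 3, 480, 640), dtype=np.float32)  # NCHW, matches imgsz=[480, 640]
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)                               # e.g. (1, 84, 6300) for 80 classes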
- Image inference:
python image_object_detection.py
- Webcam inference:
python webcam_object_detection.py
- Video inference: https://youtu.be/JShJpg8Mf7M
python video_object_detection.py
Original video: https://youtu.be/Snyg0RqpVxY
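For reference, the scripts follow roughly this pattern; the YOLOv8 class name, constructor arguments, and file paths below are taken from the upstream ONNX-YOLOv8-Object-Detection project and should be treated as assumptions, not this repository's exact API:

import cv2
from yolov8 import YOLOv8  # assumed module layout, as in the upstream project

# Load the ONNX model, run detection on one image, and draw the results
detector = YOLOv8("models/yolov8n.onnx", conf_thres=0.5, iou_thres=0.5)
img = cv2.imread("input.jpg")  # placeholder path
boxes, scores, class_ids = detector(img)
cv2.imshow("Detected Objects", detector.draw_detections(img))
cv2.waitKey(0)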
AMD64 (x86_64) platform:
DOCKER_BUILDKIT=1 docker build -t <image_name> -f Dockerfile.amd .
ARM platform or building on Jetson:
If you are not using a Jetson, you need to change the base runtime image to debian:buster or another image.
DOCKER_BUILDKIT=1 docker build -t <image_name> -f Dockerfile.jetson .
docker run -it --rm \
--device /dev/video0:/dev/video0 \
--env DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $(pwd):/code \
--runtime nvidia \
--gpus all \
<image_name>
Then, inside the container:
source /venv/bin/activate && python webcam_object_detection.py
- If you encounter an error related to qt.qpa.xcb, try running the command below before running inference:
xhost +local:docker
- You can also run with the prebuilt image from my Docker Hub:
Example of running the Docker container on a Jetson:
docker run -it --rm \
--device /dev/video0:/dev/video0 \
--env DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $(pwd):/code \
--runtime nvidia \
--gpus all \
aiden827/yolov8_onnx:jetson
- When you use TensorRT to load an ONNX model, set two environment variables, ORT_TENSORRT_ENGINE_CACHE_ENABLE=1 and ORT_TENSORRT_CACHE_PATH="/code/cache", to cache the engine built on the first load and speed up subsequent loads.
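The same caching can also be requested from Python through TensorRT execution provider options (option names from the TensorRT Execution Provider docs linked below), instead of environment variables:

import onnxruntime as ort

providers = [
    ("TensorrtExecutionProvider", {
        "trt_engine_cache_enable": True,         # cache the built TensorRT engine
        "trt_engine_cache_path": "/code/cache",  # where the cache is stored
    }),
    "CUDAExecutionProvider",  # fallback for nodes TensorRT cannot handle
    "CPUExecutionProvider",
]
session = ort.InferenceSession("yolov8n.onnx", providers=providers)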
Example of running the Docker container on a Jetson with engine caching enabled:
docker run -it --rm \
--device /dev/video0:/dev/video0 \
--env DISPLAY=$DISPLAY \
--env ORT_TENSORRT_ENGINE_CACHE_ENABLE=1 \
--env ORT_TENSORRT_CACHE_PATH="/code/cache" \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $(pwd):/code \
--runtime nvidia \
--gpus all \
aiden827/yolov8_onnx:jetson
- YOLOv8 model: https://github.com/ultralytics/ultralytics
- Jetson zoo: https://elinux.org/Jetson_Zoo
- ONNX-YOLOv8-Object-Detection: https://github.com/ibaiGorordo/ONNX-YOLOv8-Object-Detection
- TensorRT Execution Provider: https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html