AidenDam/yolov8_inference

ONNX YOLOv8 Object Detection

Original image: https://www.flickr.com/photos/nicolelee/19041780

Important

  • The input images are resized directly to the model's input size; no padding is added. If an input image's aspect ratio differs from the model's input size, this may reduce the model's accuracy. Always try to use an input size whose aspect ratio is close to that of the images you will use.
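The direct resize described above can be sketched as follows. This is a minimal numpy-only illustration of the idea (no letterbox padding, so the aspect ratio is not preserved); the repository itself most likely uses cv2.resize, and the function name and sizes here are just examples:

```python
import numpy as np

def preprocess(image: np.ndarray, input_w: int, input_h: int) -> np.ndarray:
    """Resize an HWC uint8 image directly to the model input size
    (no padding, so the aspect ratio is NOT preserved), then
    normalize to [0, 1] and convert to NCHW float32."""
    h, w = image.shape[:2]
    # Nearest-neighbour resize via plain numpy indexing
    # (a real pipeline would typically call cv2.resize instead).
    rows = np.arange(input_h) * h // input_h
    cols = np.arange(input_w) * w // input_w
    resized = image[rows[:, None], cols[None, :]]
    tensor = resized.astype(np.float32) / 255.0
    tensor = tensor.transpose(2, 0, 1)[None]  # HWC -> NCHW, add batch dim
    return tensor

# Example: a 480x640 model input built from a 720x1280 frame
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
print(preprocess(frame, input_w=640, input_h=480).shape)  # (1, 3, 480, 640)
```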

Requirements

  • Check the requirements.txt file.
  • For ONNX, install onnxruntime-gpu if you have an NVIDIA GPU; otherwise use the onnxruntime library.
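The difference between the two packages shows up in the execution providers ONNX Runtime reports: onnxruntime-gpu exposes "CUDAExecutionProvider", while the CPU-only package exposes only "CPUExecutionProvider". A small sketch of choosing providers accordingly (the helper name is ours; the lists passed in below are illustrative stand-ins for ort.get_available_providers()):

```python
def pick_providers(available):
    """Return the preferred providers, in priority order, that are
    actually available in the installed onnxruntime package."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# With onnxruntime-gpu installed you would pass
# onnxruntime.get_available_providers() here instead.
print(pick_providers(["CUDAExecutionProvider", "CPUExecutionProvider"]))
print(pick_providers(["CPUExecutionProvider"]))
```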

Installation

git clone https://github.com/AidenDam/yolov8_inference.git
cd yolov8_inference
pip install -r requirements.txt

ONNX Runtime

For Nvidia GPU computers: pip install onnxruntime-gpu

Otherwise: pip install onnxruntime

ONNX model

Use the Google Colab notebook (Open In Colab) to convert the YOLOv8 model, or follow the docs.

You can convert the model using the following code after installing ultralytics (pip install ultralytics):

from ultralytics import YOLO

# Load a pretrained YOLOv8n checkpoint and export it to ONNX
# with a fixed 480x640 (height, width) input size.
model = YOLO("yolov8n.pt")
model.export(format="onnx", imgsz=[480, 640])

Examples

  • Image inference:
python image_object_detection.py
  • Webcam inference:
python webcam_object_detection.py
  • Video inference:
python video_object_detection.py

YOLOv8 detection video

Original video: https://youtu.be/Snyg0RqpVxY

Containerization with Docker

Build Docker image

AMD64 (x86_64) platform:

DOCKER_BUILDKIT=1 docker build -t <image_name> -f Dockerfile.amd .

ARM platform or building on Jetson:

If you are not using a Jetson, change the base runtime image to debian:buster or another suitable image.

DOCKER_BUILDKIT=1 docker build -t <image_name> -f Dockerfile.jetson .

Run docker container

docker run -it --rm \
  --device /dev/video0:/dev/video0 \
  --env DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v $(pwd):/code \
  --runtime nvidia \
  --gpus all \
  <image_name>

Run inference

source /venv/bin/activate && python webcam_object_detection.py

Notice

  • If you encounter an error related to qt.qpa.xcb, try running the command below before running inference:
    xhost +local:docker
  • You can also run with my prebuilt image from my Docker Hub.
    Example: run the Docker container on a Jetson:
    docker run -it --rm \
      --device /dev/video0:/dev/video0 \
      --env DISPLAY=$DISPLAY \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      -v $(pwd):/code \
      --runtime nvidia \
      --gpus all \
      aiden827/yolov8_onnx:jetson
  • When you use TensorRT to load the ONNX model, set the two environment variables ORT_TENSORRT_ENGINE_CACHE_ENABLE=1 and ORT_TENSORRT_CACHE_PATH="/code/cache" to cache the TensorRT engine built on the first load and speed up subsequent loads.
    Example: run the Docker container on a Jetson:
    docker run -it --rm \
      --device /dev/video0:/dev/video0 \
      --env DISPLAY=$DISPLAY \
      --env ORT_TENSORRT_ENGINE_CACHE_ENABLE=1 \
      --env ORT_TENSORRT_CACHE_PATH="/code/cache" \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      -v $(pwd):/code \
      --runtime nvidia \
      --gpus all \
      aiden827/yolov8_onnx:jetson
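Inside the container, the same cache settings can also be applied from Python before the ONNX Runtime session is created. A minimal sketch, assuming the model file name and paths shown here (the session creation is commented out because it requires onnxruntime-gpu built with TensorRT support):

```python
import os

# The engine cache must be configured *before* the session is created.
os.environ["ORT_TENSORRT_ENGINE_CACHE_ENABLE"] = "1"
os.environ["ORT_TENSORRT_CACHE_PATH"] = "/code/cache"

# import onnxruntime as ort  # requires onnxruntime-gpu with TensorRT
# session = ort.InferenceSession(
#     "yolov8n.onnx",  # example model path
#     providers=["TensorrtExecutionProvider",
#                "CUDAExecutionProvider",
#                "CPUExecutionProvider"],
# )
print(os.environ["ORT_TENSORRT_ENGINE_CACHE_ENABLE"])
```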

