
# Real Time Object Detection using YOLOv11/12, OpenCV with CUDA backend in C++


## Project overview

This project implements YOLOv11/12 object detection in C++ using OpenCV with the CUDA backend. It is designed as a demo for testing COCO-trained YOLOv11/12 detection models: it supports real-time inference on NVIDIA GPUs and loads an ONNX model (e.g., yolo11s.onnx, yolo12s.onnx).

For simplicity, everything runs on the main thread; separate threads for camera capture, inference, and drawing aren't used.

The focus is on clarity and educational value.

## Demo

A demo captured using YOLOv11x:

demo

## Tested Environment

This project has been successfully tested with:

- Ubuntu: 24.04.2 LTS
- OpenCV and OpenCV Contrib: 4.12.0
- CUDA: 12.9.1
- cuDNN: 8.9.7
- CMake: 3.10+

## Quick Start

(Note: the code doesn't currently take configuration input from the user, so you need to edit the code and rebuild if you want a different configuration. Contributions to make this configurable are welcome :) )

Before building, you may need to update the following lines to suit your setup:

```cpp
// Initialize the detector with model path, model input size, labels text path,
// thresholds and inference target (GPU/CPU)
Inference detector("../model/yolo11s.onnx", cv::Size(640, 640), "../model/labels.txt",
                   { .modelScoreThreshold = 0.45f, .modelNMSThreshold = 0.50f },
                   InferenceTarget::GPU);
```

```cpp
// Open the camera device.
// Your camera device id may differ, so run 'v4l2-ctl --list-devices' in a
// terminal and change the id to 1 or whatever is available on your machine.
cv::VideoCapture cap(0);
```

After updating the configuration, you can follow the build steps:

```bash
# Fetch the project
git clone https://github.com/alperak/yolov11-12-opencv-cuda-cpp.git
cd yolov11-12-opencv-cuda-cpp

# Create a build directory
mkdir build && cd build

# Configure with CMake (to build the documentation, pass -DBUILD_DOCS=ON; Doxygen must be installed)
cmake ..

# Build the project
make -j$(nproc)

# Run the executable
./yolo-inference
```
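For reference, a minimal `CMakeLists.txt` for a project like this could look like the sketch below. The source file name is an assumption, and the repository's own file (which also defines the `BUILD_DOCS` option) may differ:

```cmake
cmake_minimum_required(VERSION 3.10)
project(yolo-inference CXX)

# OpenCV must have been built with CUDA support (opencv_contrib, -DWITH_CUDA=ON)
# for the CUDA DNN backend to be available at runtime.
find_package(OpenCV REQUIRED)

add_executable(yolo-inference src/main.cpp)  # assumed source layout
target_include_directories(yolo-inference PRIVATE ${OpenCV_INCLUDE_DIRS})
target_link_libraries(yolo-inference PRIVATE ${OpenCV_LIBS})
set_target_properties(yolo-inference PROPERTIES CXX_STANDARD 17)
```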

## Download YOLOv11/12 Models

If you want to try models other than the YOLOv11s included in the project, download pretrained YOLOv11/12 models from Ultralytics:

| Model | Size (pixels) | mAP<sup>val</sup> 50-95 | Speed CPU ONNX (ms) | Speed T4 TensorRT10 (ms) | Params (M) | FLOPs (B) |
|---|---|---|---|---|---|---|
| YOLOv11n | 640 | 39.5 | 56.1 ± 0.8 | 1.5 ± 0.0 | 2.6 | 6.5 |
| YOLOv11s | 640 | 47.0 | 90.0 ± 1.2 | 2.5 ± 0.0 | 9.4 | 21.5 |
| YOLOv11m | 640 | 51.5 | 183.2 ± 2.0 | 4.7 ± 0.1 | 20.1 | 68.0 |
| YOLOv11l | 640 | 53.4 | 238.6 ± 1.4 | 6.2 ± 0.1 | 25.3 | 86.9 |
| YOLOv11x | 640 | 54.7 | 462.8 ± 6.7 | 11.3 ± 0.2 | 56.9 | 194.9 |

| Model | Size (pixels) | mAP<sup>val</sup> 50-95 | Speed CPU ONNX (ms) | Speed T4 TensorRT (ms) | Params (M) | FLOPs (B) | Comparison (mAP/Speed) |
|---|---|---|---|---|---|---|---|
| YOLO12n | 640 | 40.6 | - | 1.64 | 2.6 | 6.5 | +2.1%/-9% (vs. YOLOv10n) |
| YOLO12s | 640 | 48.0 | - | 2.61 | 9.3 | 21.4 | +0.1%/+42% (vs. RT-DETRv2) |
| YOLO12m | 640 | 52.5 | - | 4.86 | 20.2 | 67.5 | +1.0%/-3% (vs. YOLOv11m) |
| YOLO12l | 640 | 53.7 | - | 6.77 | 26.4 | 88.9 | +0.4%/-8% (vs. YOLOv11l) |
| YOLO12x | 640 | 55.2 | - | 11.79 | 59.1 | 199.0 | +0.6%/-4% (vs. YOLOv11x) |

## Convert PyTorch Models to ONNX

A converted YOLOv11s is already included in the project, but if you want to use other models, you can convert them with the following steps:

```bash
# Install the Ultralytics package
pip install ultralytics

# For example, to try YOLOv11x, download `yolo11x.pt` into the model directory
# and convert it to ONNX format like this:
python convert_pt_to_onnx_model.py yolo11x.pt
```

You should see output like this when converting YOLOv11x:

```
python convert_pt_to_onnx_model.py yolo11x.pt
Ultralytics 8.3.202 🚀 Python-3.12.3 torch-2.8.0+cu128 CPU (Intel Core i7-8700K 3.70GHz)
YOLO11x summary (fused): 190 layers, 56,919,424 parameters, 0 gradients, 194.9 GFLOPs

PyTorch: starting from 'yolo11x.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (109.3 MB)

ONNX: starting export with onnx 1.19.0 opset 19...
ONNX: slimming with onnxslim 0.1.68...
ONNX: export success ✅ 3.7s, saved as 'yolo11x.onnx' (217.5 MB)

Export complete (5.5s)
Results saved to /home/alper/cpp-projects/YOLOv11-OpenCV-CUDA-Cpp/model
Predict:         yolo predict task=detect model=yolo11x.onnx imgsz=640
Validate:        yolo val task=detect model=yolo11x.onnx imgsz=640 data=/ultralytics/ultralytics/cfg/datasets/coco.yaml
Visualize:       https://netron.app
```
