YoloDotNet is a blazing-fast, fully featured C# library for real-time object detection, oriented bounding box (OBB) detection, segmentation, classification, pose estimation, and tracking, using YOLOv5u–v12, YOLO-World, and YOLO-E models.
Built on .NET 8, powered by ONNX Runtime, and supercharged with GPU acceleration via CUDA, YoloDotNet delivers exceptional speed and flexibility for both image and video processing — with full support for live streams, frame skipping, and custom visualizations.
- YOLOv5u
- YOLOv8
- YOLOv9
- YOLOv10
- YOLOv11
- YOLOv12
- YOLO-World
- YOLO-E
- ✓ Classification – Categorize an image
- ✓ Object Detection – Detect multiple objects in a single image
- ✓ OBB Detection – Detect rotated objects using oriented bounding boxes
- ✓ Segmentation – Separate detected objects using pixel masks
- ✓ Pose Estimation – Identify the location of specific keypoints in an image
Batteries not included.
Classification | Object Detection | OBB Detection | Segmentation | Pose Estimation |
---|---|---|---|---|
![]() | ![]() | ![]() | ![]() | ![]() |
image from pexels.com | image from pexels.com | image from pexels.com | image from pexels.com | image from pexels.com |
```
dotnet add package YoloDotNet
```
Buckle up — YoloDotNet v3.0 is here, and it’s a beast!
After countless hours of profiling, tightening bolts, and chasing down bottlenecks, this release delivers major performance gains, streamlined APIs, and cutting-edge model support. Whether you're building inference pipelines, tracking objects across frames, or customizing your visual output, v3.0 brings upgrades in speed, flexibility, and usability.
This is the fastest release yet, with up to 70% faster inference and up to 92% lower memory usage, depending on task and hardware. From lightning-fast segmentation to smarter tracking and smoother rendering, v3.0 is built to crush bottlenecks.
YoloDotNet v3.0 | Device | Speed Gain | Memory Reduction |
---|---|---|---|
Segmentation | GPU | up to 70.8% faster | up to 92.7% less memory |
Segmentation | CPU | up to 12.6% faster | up to 92.7% less memory |
Classification | GPU | up to 28.5% faster | up to 46.1% less memory |
Classification | CPU | up to 9.6% faster | up to 46.1% less memory |
OBB Detection | GPU | up to 28.7% faster | up to 2.2% less memory |
OBB Detection | CPU | up to 4.7% faster | up to 2.9% less memory |
Pose Estimation | GPU | up to 27.6% faster | up to 0.7% less memory |
Pose Estimation | CPU | up to 4.6% faster | up to 0.7% less memory |
Detection | GPU | up to 25.0% faster | up to 0.8% less memory |
Detection | CPU | up to 5.3% faster | up to 1.0% less memory |
Note: This summary highlights the maximum observed speed and memory improvements between versions. For full benchmark results, hardware used, detailed per-task analysis, and raw data, please see the complete benchmarks.
- Model Support That Keeps You Ahead
- YOLO-E (Zero-Shot Detection) – tap into cutting-edge open-world object detection.
- YOLOv5u (Ultralytics fork of YOLOv5 using the YOLOv8 architecture) is now fully supported.
- Load ONNX models directly from byte[] — ideal for embedded, encrypted, or dynamic loading scenarios.
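Loading from a `byte[]` keeps the model off disk entirely. A minimal sketch of the idea follows; note that `OnnxModelBytes` is an assumed property name for illustration, so check the `YoloOptions` members in your YoloDotNet version for the actual byte-array option:

```csharp
using System;
using System.IO;
using YoloDotNet;
using YoloDotNet.Models;

// Read the model from wherever it lives: an embedded resource,
// a database blob, or a freshly decrypted buffer.
byte[] modelBytes = File.ReadAllBytes("model.onnx");

// NOTE: 'OnnxModelBytes' is an assumed property name, used here only to
// illustrate the byte[] loading scenario described above.
using var yolo = new Yolo(new YoloOptions
{
    OnnxModelBytes = modelBytes,
    Cuda = false
});

Console.WriteLine($"Model Type: {yolo.ModelInfo}");
```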
- Streamlined Inference & Performance
- Simplified inference pipeline — fewer steps, more power, same accuracy.
- Inference can now run directly on SKBitmap and SKImage types (SkiaSharp), unlocking powerful image workflows.
- Improved drawing performance when rendering results onto images or frames.
- Video & Tracking Enhancements
- Video processing has been redesigned and simplified — less boilerplate, more control.
- Split video streams into chunks — perfect for long video processing or batching inference tasks.
- Added SORT object tracking for maintaining object identity across frames. (See demo project)
- Customizable frame intervals — process every frame, or every 2nd, 10th, 30th… you decide. This is perfect for high-frame-rate or surveillance streams where analyzing every single frame isn’t necessary.
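The built-in video pipeline handles frame intervals for you (see the demo project), but the idea is easy to sketch by hand: decode frames with whatever library you prefer and only run inference on every Nth one. `TryGetNextFrame` below is a placeholder for your own decoder, not a YoloDotNet API:

```csharp
// Manual frame-skipping sketch: run inference on every 10th frame only.
// 'TryGetNextFrame' stands in for your own video decoder.
const int frameInterval = 10;
long frameIndex = 0;

while (TryGetNextFrame(out SKBitmap frame))
{
    if (frameIndex++ % frameInterval == 0)
    {
        var results = yolo.RunObjectDetection(frame, confidence: 0.25, iou: 0.7);
        // ...track, draw, or log the results here...
    }
    frame.Dispose();
}
```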
- Custom Visualizations
- Choose your own font type for result annotations.
- Define custom colors per class and bounding box — full control over output styling.
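For full manual control you can also bypass the built-in drawing helpers and paint with SkiaSharp directly. This standalone sketch draws one labeled box with a custom color and font; the rectangle and label text are hard-coded stand-ins for real detection results:

```csharp
using SkiaSharp;

using var bitmap = new SKBitmap(640, 480);
using var canvas = new SKCanvas(bitmap);
canvas.Clear(SKColors.Black);

// Hard-coded stand-in for a detected bounding box.
var box = new SKRect(100, 80, 300, 260);

// Per-class color of your choosing.
using var boxPaint = new SKPaint
{
    Style = SKPaintStyle.Stroke,
    StrokeWidth = 3,
    Color = SKColors.DeepPink
};
canvas.DrawRect(box, boxPaint);

// Custom font family and size for the label.
using var font = new SKFont(SKTypeface.FromFamilyName("Consolas"), 18);
using var textPaint = new SKPaint { Color = SKColors.DeepPink, IsAntialias = true };
canvas.DrawText("person 0.92", box.Left, box.Top - 6, font, textPaint);
```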
- Dependency Updates
- ONNX Runtime 1.22.1
- SkiaSharp 3.119.0
YoloDotNet with GPU-acceleration requires CUDA Toolkit 12.x and cuDNN 9.x.
See ONNX Runtime's CUDA Execution Provider documentation for the exact CUDA and cuDNN versions it is currently compatible with.
- Install CUDA v12.x
- Install cuDNN v9.x
- Update your system PATH variable:

  1. Open File Explorer and navigate to the folder where the cuDNN DLLs are installed. The typical path looks like:

     `C:\Program Files\NVIDIA\CUDNN\v9.x\bin\v12.x` (where x is your version)

  2. Once you are in this folder (which contains the .dll files), copy the folder path from the address bar at the top of the window.

  3. Add the cuDNN path to your System Variables:

     - Type `env` in the Windows search bar
     - Click `Edit the system environment variables`
     - Click `Environment Variables`
     - Under `System Variables`, select the `Path` variable and click `Edit`
     - Click `New` and paste in your cuDNN DLL folder path
     - Click OK to save the changes

  4. Captain-Obvious-important! For Windows to recognize your new environment variables, be sure to close all open programs before continuing, or just give your system a quick restart. Otherwise, your changes might play hide-and-seek! ;)
All models — including your own custom models — must be exported to the ONNX format with opset 17
for best performance.
For more information on how to export YOLO models to ONNX, read this guide.
The ONNX-models included in this repo for test and demo purposes are from Ultralytics s-series (small). https://docs.ultralytics.com/models.
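For reference, Ultralytics models can be exported to ONNX from the command line with the Ultralytics tooling; the model filename below is just an example:

```shell
# Install the Ultralytics CLI, then export a model with opset 17.
pip install ultralytics
yolo export model=yolov8s.pt format=onnx opset=17
```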
Can’t wait to see YoloDotNet in action? The demo projects are the fastest way to get started and explore everything this library can do.
Each demo showcases one of the supported tasks:
- Classification – What is this thing?
- Object Detection – What are all these things?
- OBB Detection – Rotated objects? No problem.
- Segmentation – Color between the lines… automatically.
- Pose Estimation – Find the limbs, strike a pose!
Oh, and it doesn’t stop there — there’s a demo for real-time video inference too! Whether you’re analyzing local video files or streaming live, the demos have you covered.
Each demo is packed with inline comments to help you understand how everything works under the hood. From model setup and preprocessing to video streaming and result rendering — it's all there.
Pro tip: For detailed configuration options and usage guidance, check out the comments in the demo source files. That’s where the real magic happens.
Open the YoloDotNet.Demo projects, build, run, and start detecting at full speed. ✨
Sometimes you just want to see the magic happen without the bells and whistles. Here’s the absolute simplest way to load a model, run inference on an image, and get your detected objects:
using SkiaSharp;
using YoloDotNet;
using YoloDotNet.Enums;
using YoloDotNet.Models;
using YoloDotNet.Extensions;
public class Program
{
static void Main(string[] args)
{
// Fire it up! Create an instance of YoloDotNet and reuse it across your app's lifetime.
// Prefer the 'using' pattern for automatic cleanup if you're done after a single run.
var yolo = new Yolo(new YoloOptions
{
OnnxModel = "model.onnx",
Cuda = true,
PrimeGpu = true,
GpuId = 0,
ImageResize = ImageResize.Proportional
// Choose between Proportional or Stretched resizing.
// Use 'Proportional' if your model was trained with images that preserve aspect ratio (e.g., padded borders).
// Use 'Stretched' if your training data was resized to fit the model's input dimensions directly.
// This setting influences detection accuracy, so be sure it aligns with how the model was trained.
});
// Which YOLO magic is this? Let’s find out!
Console.WriteLine($"Model Type: {yolo.ModelInfo}");
// Load image with SkiaSharp
using var image = SKBitmap.Decode("image.jpg");
// Run object detection with default values
var results = yolo.RunObjectDetection(image, confidence: 0.20, iou: 0.7);
image.Draw(results); // Overlay results on image
image.Save("result.jpg"); // Save to disk – boom, done!
// Clean up – unless you're using 'using' above.
yolo?.Dispose();
}
}
That’s it! No fuss, just fast and easy detection.
Of course, the real power lies in customizing the pipeline, streaming videos, or tweaking models… but this snippet gets you started in seconds.
Want more? Dive into the demos and source code for full examples, from video streams to segmentation and pose estimation.
Want to give your detections a personal touch? Go ahead! If you're drawing bounding boxes on-screen, there’s full flexibility to style them just the way you like:
- Custom Colors – Use the built-in class-specific colors or define your own for every bounding box.
- Font Style & Size – Choose your favorite font, set the size, and even change the color for the labels.
- Custom Fonts – Yep, you can load your own font files to give your overlay a totally unique feel.
If that's not enough, check out the extension methods in the main YoloDotNet repository — a solid boilerplate for building even deeper customizations tailored exactly to your needs.
For practical examples on drawing and customization, don’t forget to peek at the demo project source code too!
YoloDotNet is the result of countless hours of development, testing, and continuous improvement — all offered freely to the community.
If you’ve found this project helpful, consider supporting its development. Your contribution helps cover the time and resources needed to keep the project maintained, updated, and freely available to everyone.
Support the project:
👉 GitHub Sponsor
👉 PayPal
Every donation, no matter the size, is greatly appreciated. Thank you!
https://github.com/ultralytics/ultralytics
https://github.com/sstainba/Yolov8.Net
https://github.com/mentalstack/yolov5-net
YoloDotNet is © 2023 Niklas Swärd (GitHub)
Licensed under the GNU General Public License v3.0 or later.
See the LICENSE file for full license text.
This software is provided "as-is", without warranty of any kind.
The author is not liable for any damages arising from its use.