Deep Learning API and Server in C++14 with support for PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost, and t-SNE
🚀 Easier & Faster YOLO Deployment Toolkit for NVIDIA 🛠️
FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation (ICRA 2021)
BEVDet implemented in TensorRT and C++, achieving real-time performance on Orin
Based on TensorRT v8.0+: deploy detection, pose, segmentation, and tracking for YOLO11 with C++ and Python APIs.
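For context, a minimal sketch of what such a C++ deployment typically looks like with the TensorRT 8 runtime API (not this repo's actual code); the engine file name and tensor shapes below are placeholder assumptions:

```cpp
// Minimal sketch: deserialize a TensorRT 8 engine and run one inference.
// "yolo.engine" and the I/O shapes are placeholders, not taken from the repo above.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // Read the engine serialized offline (e.g. by trtexec or the builder API).
    std::ifstream file("yolo.engine", std::ios::binary | std::ios::ate);
    if (!file) { std::cerr << "engine file not found\n"; return 1; }
    std::vector<char> blob(file.tellg());
    file.seekg(0);
    file.read(blob.data(), blob.size());

    auto* runtime = nvinfer1::createInferRuntime(gLogger);
    auto* engine  = runtime->deserializeCudaEngine(blob.data(), blob.size());
    auto* context = engine->createExecutionContext();

    // Device buffers ordered by binding index; 1x3x640x640 input is an assumption.
    void* buffers[2];
    cudaMalloc(&buffers[0], 1 * 3 * 640 * 640 * sizeof(float));
    cudaMalloc(&buffers[1], 1 * 84 * 8400 * sizeof(float)); // placeholder output size

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    context->enqueueV2(buffers, stream, nullptr); // async launch, TensorRT 8 style
    cudaStreamSynchronize(stream);

    cudaFree(buffers[0]);
    cudaFree(buffers[1]);
    cudaStreamDestroy(stream);
    delete context; delete engine; delete runtime;
    return 0;
}
```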
Deploy Stable Diffusion models with ONNX/TensorRT + Triton Inference Server
ROS 2 packages for NVIDIA-accelerated DNN model inference using NVIDIA Triton/TensorRT, for both Jetson and x86_64 platforms with a CUDA-capable GPU
Based on TensorRT v8.0+: deploy detection, pose, segmentation, and tracking for YOLOv8 with C++ and Python APIs.
ComfyUI Depth Anything (v1/v2) TensorRT custom node (up to 14x faster)
Traffic analysis at a roundabout using computer vision
A YOLOv11 C++ project optimized using NVIDIA TensorRT
YOLOv5 TensorRT implementations
Using TensorRT for Inference Model Deployment.
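As a sketch of the deployment flow such projects generally follow: parse an ONNX model, build and serialize a TensorRT engine offline, then deserialize it at runtime as in the loading sketch above. The file names and workspace size here are placeholder assumptions:

```cpp
// Minimal sketch: parse an ONNX model, build a TensorRT engine, and serialize
// it to disk for the runtime side. "model.onnx"/"model.engine" are placeholders.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    auto* builder = nvinfer1::createInferBuilder(gLogger);
    auto* network = builder->createNetworkV2(
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));
    auto* parser  = nvonnxparser::createParser(*network, gLogger);

    if (!parser->parseFromFile("model.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "failed to parse ONNX model\n";
        return 1;
    }

    auto* config = builder->createBuilderConfig();
    // 1 GiB workspace; setMemoryPoolLimit is the TensorRT 8.4+ spelling.
    config->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kWORKSPACE, 1ULL << 30);
    if (builder->platformHasFastFp16())
        config->setFlag(nvinfer1::BuilderFlag::kFP16); // opt in to FP16 where supported

    // Serialize the optimized engine so the runtime only has to deserialize it.
    auto* serialized = builder->buildSerializedNetwork(*network, *config);
    std::ofstream out("model.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    return 0;
}
```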
Based on TensorRT 8.2.4: compares inference speed across different TensorRT APIs.
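A comparison like this is usually done by timing each call with CUDA events; a fragment-style sketch that reuses the `context`, `buffers`, and `stream` names from the loading sketch above, all of which are assumptions:

```cpp
// Time one asynchronous enqueueV2 launch with CUDA events; comparing APIs means
// swapping this call for, e.g., the synchronous executeV2(buffers).
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start, stream);
context->enqueueV2(buffers, stream, nullptr);
cudaEventRecord(stop, stream);
cudaEventSynchronize(stop);

float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);
std::cout << "latency: " << ms << " ms\n";

cudaEventDestroy(start);
cudaEventDestroy(stop);
```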
Production-ready YOLOv8 segmentation deployment with TensorRT and ONNX support for CPU/GPU, including AI model integration guidance for Unitlab Annotate.
Use DBNet to detect words or barcodes; knowledge distillation and Python TensorRT inference are also provided.
ViTPose without MMCV dependencies
C++ TensorRT Implementation of NanoSAM