Deep Learning API and Server in C++14 with support for PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost, and TSNE
FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation (ICRA 2021)
BEVDet implemented with TensorRT in C++, achieving real-time performance on Orin
Based on TensorRT v8.0+, deploys detection, pose, segmentation, and tracking for YOLO11 with C++ and Python APIs.
Deploy a Stable Diffusion model with ONNX/TensorRT + Triton Inference Server
NVIDIA-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU
ComfyUI Depth Anything (v1/v2) TensorRT custom node (up to 14x faster)
Based on TensorRT v8.0+, deploys detection, pose, segmentation, and tracking for YOLOv8 with C++ and Python APIs.
Traffic analysis at a roundabout using computer vision
YOLOv11 inference implemented in C++ and optimized with NVIDIA TensorRT
YOLOv5 TensorRT implementations
Using TensorRT for inference model deployment (see the sketch after this list).
Use DBNet to detect words or barcodes; knowledge distillation and Python TensorRT inference are also provided.
Production-ready YOLOv8 segmentation deployment with TensorRT and ONNX support for CPU/GPU, including AI model integration guidance for Unitlab Annotate.
Based on TensorRT 8.2.4, compares inference speed across different TensorRT APIs.
A TensorRT version of UNet, inspired by tensorrtx
C++ TensorRT Implementation of NanoSAM
ViTPose without MMCV dependencies
C++ inference code for the SMOKE 3D object detection model
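
Most of the C++ projects above follow the same TensorRT deployment pattern: deserialize a prebuilt engine (plan) file, allocate device buffers for the input/output bindings, and run an execution context. The sketch below illustrates that pattern against the TensorRT 8.x API; the engine filename, binding order, and tensor shapes are illustrative assumptions, not taken from any specific repository listed here.

```cpp
// Minimal TensorRT 8.x inference sketch (assumptions: a serialized engine at
// "model.engine", binding 0 = input, binding 1 = output, fixed shapes below).
// Build roughly as: g++ demo.cpp -lnvinfer -lcudart
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// TensorRT requires an ILogger; this one prints warnings and errors only.
class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) std::cerr << msg << "\n";
  }
};

int main() {
  // "model.engine" is a placeholder for a plan file serialized beforehand,
  // e.g. with `trtexec --onnx=model.onnx --saveEngine=model.engine`.
  std::ifstream file("model.engine", std::ios::binary);
  if (!file) { std::cerr << "engine file not found\n"; return 1; }
  std::vector<char> plan((std::istreambuf_iterator<char>(file)),
                         std::istreambuf_iterator<char>());

  Logger logger;
  nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);
  nvinfer1::ICudaEngine* engine =
      runtime->deserializeCudaEngine(plan.data(), plan.size());
  nvinfer1::IExecutionContext* context = engine->createExecutionContext();

  // Illustrative sizes only; real code should query the engine's bindings.
  const size_t inputBytes  = 1 * 3 * 640 * 640 * sizeof(float);
  const size_t outputBytes = 1 * 1000 * sizeof(float);
  std::vector<float> hostInput(inputBytes / sizeof(float), 0.0f);
  std::vector<float> hostOutput(outputBytes / sizeof(float));

  void* bindings[2];
  cudaMalloc(&bindings[0], inputBytes);
  cudaMalloc(&bindings[1], outputBytes);

  // Host -> device copy, synchronous execution, device -> host copy.
  cudaMemcpy(bindings[0], hostInput.data(), inputBytes, cudaMemcpyHostToDevice);
  context->executeV2(bindings);
  cudaMemcpy(hostOutput.data(), bindings[1], outputBytes, cudaMemcpyDeviceToHost);

  std::cout << "first output value: " << hostOutput[0] << "\n";

  cudaFree(bindings[0]);
  cudaFree(bindings[1]);
  delete context;  // since TensorRT 8.0, delete replaces the old destroy()
  delete engine;
  delete runtime;
  return 0;
}
```

Production code would query binding names and dimensions from the engine instead of hard-coding them, and on TensorRT 8.5+ could run asynchronously via enqueueV3 on a CUDA stream.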