Deployment and quantization library for PC and Jetson; supports INT8 quantization for YOLOv3/v4/v5.
C++ implementation of "An Improved Association Pipeline for Multi-Person Tracking".
C++/C TensorRT inference example for models created with PyTorch/JAX/TF.
A lightweight, high-performance deep learning inference tool.
A lightweight C++ implementation of YOLOv8 running on NVIDIA's TensorRT engine.
Based on TensorRT v8.2; builds the YOLOv5-v5.0 network from scratch to speed up YOLOv5-v5.0 inference.
Rust gRPC server for face recognition, face detection, and face alignment using TensorRT and CUDA on the JetPack SDK (Jetson Nano, Jetson Xavier NX).
A tutorial for getting started with running TensorRT engines and Deep Learning Accelerator (DLA) models on threads.
Generating a TensorRT engine from an ONNX model (see the sketch after this list).
An object tracking project with YOLOv5-v5.0 and DeepSORT, sped up with C++ and TensorRT.
Export a TensorRT engine (from ONNX) and run inference with C++.
C++ TensorRT Implementation of NanoSAM
C++ inference code for the SMOKE 3D object detection model.
Based on TensorRT v8.0+; deploys YOLOv8 detection, pose, segmentation, and tracking with C++ and Python APIs.
A TensorRT version of UNet, inspired by tensorrtx.
NVIDIA-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT, for both Jetson and x86_64 with a CUDA-capable GPU.
BEVDet implemented with TensorRT in C++, achieving real-time performance on Orin.
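Several of the entries above share the same two-step workflow: convert an ONNX export to a serialized TensorRT engine, then run that engine from C++. Below is a minimal sketch of that workflow, assuming TensorRT 8.4+ (for `setMemoryPoolLimit`), the ONNX parser, and CUDA; the file names `model.onnx`/`model.engine`, the single-input/single-output FP32 binding layout, and the 1 GiB workspace are illustrative assumptions, not details taken from any particular repository listed here.

```cpp
// build_and_run.cpp — a minimal sketch, not a drop-in for any repo above.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <cuda_runtime_api.h>
#include <fstream>
#include <iostream>
#include <memory>
#include <vector>

// TensorRT requires a logger implementation.
struct Logger : nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << "\n";
    }
};

// Number of elements described by a Dims struct (assumes no dynamic axes).
static size_t volume(const nvinfer1::Dims& d) {
    size_t v = 1;
    for (int i = 0; i < d.nbDims; ++i) v *= static_cast<size_t>(d.d[i]);
    return v;
}

int main() {
    Logger logger;

    // --- Build: parse the ONNX file and serialize an engine ---
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));
    auto parser = std::unique_ptr<nvonnxparser::IParser>(
        nvonnxparser::createParser(*network, logger));
    if (!parser->parseFromFile("model.onnx",  // assumed file name
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
        return 1;

    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
    config->setMemoryPoolLimit(nvinfer1::MemoryPoolType::kWORKSPACE, 1ULL << 30);  // 1 GiB
    // config->setFlag(nvinfer1::BuilderFlag::kINT8);  // INT8 also needs a calibrator or Q/DQ model

    auto plan = std::unique_ptr<nvinfer1::IHostMemory>(
        builder->buildSerializedNetwork(*network, *config));
    std::ofstream("model.engine", std::ios::binary)
        .write(static_cast<const char*>(plan->data()), plan->size());  // cache for later runs

    // --- Run: deserialize the plan and execute one inference ---
    auto runtime = std::unique_ptr<nvinfer1::IRuntime>(nvinfer1::createInferRuntime(logger));
    auto engine = std::unique_ptr<nvinfer1::ICudaEngine>(
        runtime->deserializeCudaEngine(plan->data(), plan->size()));
    auto context = std::unique_ptr<nvinfer1::IExecutionContext>(engine->createExecutionContext());

    // Assumes binding 0 is the input and binding 1 the output, both FP32.
    size_t inBytes  = volume(engine->getBindingDimensions(0)) * sizeof(float);
    size_t outBytes = volume(engine->getBindingDimensions(1)) * sizeof(float);
    std::vector<float> input(inBytes / sizeof(float), 0.5f);  // placeholder input
    std::vector<float> output(outBytes / sizeof(float));

    void* bindings[2];
    cudaMalloc(&bindings[0], inBytes);
    cudaMalloc(&bindings[1], outBytes);
    cudaMemcpy(bindings[0], input.data(), inBytes, cudaMemcpyHostToDevice);
    context->executeV2(bindings);  // synchronous execution
    cudaMemcpy(output.data(), bindings[1], outBytes, cudaMemcpyDeviceToHost);

    std::cout << "first output value: " << output[0] << "\n";
    cudaFree(bindings[0]);
    cudaFree(bindings[1]);
    return 0;
}
```

Serializing the plan to disk matters in practice: engine building is slow, and engines are specific to the GPU and TensorRT version they were built on, which is why a plan built on a PC cannot simply be copied to a Jetson.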