FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation (ICRA 2021)
YOLOv5 TensorRT implementations.
ComfyUI Depth Anything TensorRT custom node (up to 5x faster).
Use DBNet to detect words or barcodes; knowledge distillation and Python TensorRT inference are provided.
Production-ready YOLOv8 segmentation deployment with TensorRT and ONNX support for CPU/GPU, including AI model integration guidance for Unitlab Annotate.
ViTPose without MMCV dependencies.
The real-time instance segmentation algorithm SparseInst running on TensorRT and ONNX.
Based on TensorRT 8.2.4; compares inference speed across different TensorRT APIs.
Convert YOLO models to ONNX and TensorRT, adding batched NMS (NMSBatched).
An oriented object detection framework based on TensorRT
Advanced inference performance using TensorRT for CRAFT text detection. Implements modules to convert PyTorch -> ONNX -> TensorRT, with dynamic-shape (multi-size input) inference.
The "Narrative Canvas" project is an edge-computing project based on the Nvidia Jetson. It transforms uploaded images into captivating stories and artworks.
Export a TensorRT engine (from ONNX) and run inference with Python.
Dolphin is a Python toolkit meant to speed up TensorRT inference by providing CUDA-accelerated processing.
An MNIST example of how to convert a .pt file to .onnx, then convert the .onnx file to a .trt file.
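The .pt -> .onnx -> .trt pipeline that this kind of repository implements can be sketched as follows. This is a minimal illustration, not the repository's actual code: the checkpoint path, the MNIST input shape, and the `export_to_onnx` helper name are assumptions, and the final .onnx -> .trt step is typically done with TensorRT's bundled `trtexec` tool.

```python
def export_to_onnx(pt_path: str, onnx_path: str) -> None:
    """Sketch of the .pt -> .onnx step (hypothetical paths and shapes)."""
    # Deferred import so the sketch can be read without PyTorch installed.
    import torch

    # Load the full model object saved in the .pt checkpoint.
    # weights_only=False is needed on recent PyTorch when the file
    # contains a pickled nn.Module rather than a plain state_dict.
    model = torch.load(pt_path, weights_only=False)
    model.eval()

    # MNIST-shaped dummy input (batch=1, 1 channel, 28x28) used for tracing.
    dummy = torch.randn(1, 1, 28, 28)
    torch.onnx.export(model, dummy, onnx_path,
                      input_names=["input"], output_names=["output"])

# The .onnx -> .trt step is usually a one-liner with TensorRT's trtexec:
#   trtexec --onnx=model.onnx --saveEngine=model.trt
```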
Inference code for `ogata-lab/eipl`. Control robots with machine learning models on an edge computer.
Model conversion and inference code for different backends.
YOLOX TensorRT object detection
Convert ONNX models to TensorRT engines and run inference in containerized environments
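An ONNX-to-TensorRT conversion like the one above usually boils down to a few TensorRT Python API calls. The sketch below assumes TensorRT 8.x Python bindings and a CUDA-capable GPU inside the container; the `build_engine` helper name and file paths are placeholders, not code from any of these repositories.

```python
def build_engine(onnx_path: str, engine_path: str) -> None:
    """Sketch: parse an ONNX model and serialize a TensorRT engine."""
    # Deferred import so the sketch can be read without TensorRT installed.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Explicit-batch networks are required when parsing ONNX in TensorRT 8.x.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    # Cap the builder's workspace memory pool at 1 GiB.
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
    serialized = builder.build_serialized_network(network, config)

    with open(engine_path, "wb") as f:
        f.write(serialized)
```

At inference time the serialized engine is deserialized with `trt.Runtime(logger).deserialize_cuda_engine(...)` and executed through an execution context.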