Submission for Nvidia's Jetson AI Specialist
Updated Mar 21, 2022 (CUDA)
The YOLOv5 inference code, built on the TensorRT C++ API, is packaged into a dynamic link library and then called from Python.
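The pattern described above — a TensorRT C++ engine compiled into a shared library and driven from Python — is commonly done with `ctypes`. Below is a minimal sketch; the library name `libyolov5_trt.so` and the exported function `trt_infer` are hypothetical placeholders for whatever the actual repository exports.

```python
import ctypes
import numpy as np

class TrtYoloV5:
    """Thin Python wrapper over a TensorRT C++ engine compiled as a
    shared library. Names below (libyolov5_trt.so, trt_infer) are
    illustrative assumptions, not the repository's real exports."""

    def __init__(self, lib_path="libyolov5_trt.so"):
        self.lib_path = lib_path
        self.lib = None  # loaded lazily on first inference

    def _load(self):
        if self.lib is None:
            self.lib = ctypes.CDLL(self.lib_path)
            # Declare the C signature so ctypes marshals arguments correctly.
            self.lib.trt_infer.argtypes = [
                ctypes.POINTER(ctypes.c_float),  # flattened input image
                ctypes.c_int,                    # input length
                ctypes.POINTER(ctypes.c_float),  # output detection buffer
            ]
            self.lib.trt_infer.restype = ctypes.c_int  # number of boxes

    def infer(self, image: np.ndarray) -> np.ndarray:
        self._load()
        inp = image.astype(np.float32).ravel()
        out = np.zeros(100 * 6, dtype=np.float32)  # up to 100 boxes x 6 fields
        n = self.lib.trt_infer(
            inp.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
            inp.size,
            out.ctypes.data_as(ctypes.POINTER(ctypes.c_float)),
        )
        return out[: n * 6].reshape(-1, 6)  # [x, y, w, h, conf, class]
```

Keeping the engine and CUDA context inside the C++ library avoids exposing TensorRT's Python bindings at all; Python only moves NumPy buffers across the boundary.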
Speeds up image preprocessing with CUDA, both for image handling and for TensorRT inference.
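For reference, the preprocessing that such projects typically move onto the GPU is the normalize-and-transpose step before inference. A minimal CPU sketch of that arithmetic (in NumPy, as an assumption about the exact pipeline) looks like this; a CUDA kernel performs the same per-pixel math while writing directly into the TensorRT input binding, skipping a host-to-device copy.

```python
import numpy as np

def preprocess(image_hwc: np.ndarray) -> np.ndarray:
    """CPU reference of GPU preprocessing: scale uint8 pixels to [0, 1],
    reorder HWC -> CHW, and add a batch dimension. The exact steps in
    any given repository may differ (e.g. letterboxing, mean/std)."""
    x = image_hwc.astype(np.float32) / 255.0  # scale to [0, 1]
    x = np.transpose(x, (2, 0, 1))            # HWC -> CHW
    return x[np.newaxis]                      # add batch dimension

img = np.random.randint(0, 256, (640, 640, 3), dtype=np.uint8)
batch = preprocess(img)  # shape (1, 3, 640, 640), float32
```

Fusing these steps into one CUDA kernel matters on Jetson-class devices, where a CPU resize/normalize loop can dominate end-to-end latency.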
Deployment of YOLOv8-seg on the Jetson AGX Xavier (a YOLOv8 detection and segmentation model with low-light compensation).
Using TensorRT for Inference Model Deployment.