This repository serves as an example of deploying the YOLO models on Triton Server for performance and testing purposes
Updated May 23, 2024 - Shell
This repository provides an out-of-the-box deployment solution: an end-to-end procedure to train, deploy, and serve YOLOv7 models on NVIDIA GPUs using Triton Server and DeepStream.
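As a rough sketch of what deploying a model on Triton Server involves, the snippet below builds the directory layout Triton expects: one folder per model, numbered version subfolders, and a `config.pbtxt`. The model name `yolov7`, the `tensorrt_plan` platform, and the batch size are assumptions for illustration; the actual repository may use different names and settings.

```shell
# Triton scans a model repository laid out as:
#   model_repository/<model-name>/<version>/model.plan
# The model name and config values below are illustrative assumptions.
mkdir -p model_repository/yolov7/1

# Minimal model configuration; a real deployment would also declare
# input/output tensors matching the exported TensorRT engine.
cat > model_repository/yolov7/config.pbtxt <<'EOF'
name: "yolov7"
platform: "tensorrt_plan"
max_batch_size: 8
EOF

# The exported TensorRT engine (model.plan) would be copied into
# model_repository/yolov7/1/ before pointing Triton at the repository, e.g.:
#   tritonserver --model-repository=$(pwd)/model_repository
```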
Shell scripts to run DeepStream on a local Ubuntu machine
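Such scripts typically drive the `deepstream-app` reference application with a text config file. The fragment below is a minimal sketch of that config format, assuming a single file source and an on-screen sink; the file URI and inference config path are placeholders, not values from the repository.

```
# Minimal deepstream-app config sketch (paths are illustrative assumptions)
[application]
enable-perf-measurement=1

[source0]
enable=1
# type=3 selects a URI source in the reference app
type=3
uri=file:///path/to/sample.mp4
num-sources=1

[primary-gie]
enable=1
# Points at a separate nvinfer config describing the detector model
config-file=config_infer_primary.txt

[sink0]
enable=1
# type=2 renders to screen via EGL
type=2
```

It would then be launched with `deepstream-app -c <config-file>`.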
This demonstrates the integration of NVIDIA DeepStream, AWS IoT Core, and AWS IoT Greengrass on the Jetson Nano device.