Comparing Deep Learning Inference of Pytorch models running on CPU, CUDA and TensorRT



Inference_withTorchTensorRT

Comparing Deep Learning Inference of Pytorch models running on CPU, CUDA and TensorRT

YouTube Tutorial

https://youtu.be/iFADsRDJhDM

Please note

This .ipynb notebook is meant to run in a Torch-TensorRT Docker container
or an NVIDIA NGC container.
You can find detailed setup instructions in the video above.
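As a rough sketch, launching an NGC PyTorch container (which ships with Torch-TensorRT pre-installed) typically looks like the command below. The image tag here is only an example; use the release shown in the video, and mount the folder that contains this notebook:

```shell
# Run an NVIDIA NGC PyTorch container with GPU access.
# --gpus all      : expose the host GPUs to the container
# -v $(pwd):/workspace : mount the current folder (with the notebook) inside
# -p 8888:8888    : forward the Jupyter port
# The tag 23.01-py3 is an example - pick the release used in the tutorial.
docker run --gpus all -it --rm \
    -v $(pwd):/workspace \
    -p 8888:8888 \
    nvcr.io/nvidia/pytorch:23.01-py3
```

Once inside the container, start Jupyter and open the notebook from /workspace.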

About

author: Mariya Sha
dependencies: PyTorch, Torchvision, Pandas, Torch-TensorRT

In this notebook we will run inference on CPU, CUDA and TensorRT and
compare their speed with a special benchmarking utility function.
We will load a pre-trained neural network (ResNet50) and use it to
predict a never-before-seen picture of my cat.
You can either use my picture (img1.jpg) or choose one
from your personal gallery (highly recommended!)
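The benchmarking utility times each backend over repeated runs and averages the results. Below is a minimal stdlib-only sketch of that idea; the function name and arguments are my own, not the notebook's exact code. In the notebook the callable would wrap a forward pass such as model(input_batch) on CPU, CUDA, or the TensorRT-compiled model:

```python
import time
import statistics

def benchmark(fn, n_warmup=3, n_runs=10):
    """Return the mean wall-clock time (seconds) of a zero-argument callable.

    Warm-up runs are discarded so one-time costs (caches, JIT compilation,
    GPU initialization) do not skew the measurement.
    """
    for _ in range(n_warmup):
        fn()  # warm-up: result deliberately ignored
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings)

# Toy example: compare a heavier workload against a near-empty one.
slow = lambda: sum(i * i for i in range(100_000))
fast = lambda: None
print(f"slow: {benchmark(slow):.6f}s  fast: {benchmark(fast):.6f}s")
```

Note that when timing CUDA or TensorRT inference, you should also call torch.cuda.synchronize() before reading the clock, since GPU kernels execute asynchronously and perf_counter would otherwise stop too early.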

Please feel free to use my code anywhere you'd like, no need to credit me!
However - if you do, I'll really appreciate it :)
