Comparing Deep Learning Inference of PyTorch models running on CPU, CUDA and TensorRT
This .ipynb notebook is meant to run in a Torch-TensorRT Docker container
or an NVIDIA NGC container.
You can find detailed setup instructions in the video above.
author: Mariya Sha
dependencies: PyTorch, Torchvision, Pandas, Torch-TensorRT
In this notebook we will run inference on CPU, CUDA and TensorRT, and
compare their speed with a special benchmarking utility function.
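A minimal sketch of what such a benchmarking utility might look like (this is my own illustration, not necessarily the exact function used later in the notebook): it warms the model up, then times repeated forward passes and reports the average latency. The default input shape of (1, 3, 224, 224) is an assumption matching ResNet50's expected input.

```python
import time
import torch

def benchmark(model, input_shape=(1, 3, 224, 224), dtype=torch.float32,
              device="cpu", nwarmup=10, nruns=100):
    """Time repeated forward passes and return average latency in ms."""
    model = model.to(device).eval()
    dummy = torch.rand(input_shape, dtype=dtype, device=device)
    with torch.no_grad():
        for _ in range(nwarmup):       # warm-up runs exclude one-time setup costs
            model(dummy)
        if device != "cpu":
            torch.cuda.synchronize()   # wait for queued GPU kernels to finish
        start = time.perf_counter()
        for _ in range(nruns):
            model(dummy)
        if device != "cpu":
            torch.cuda.synchronize()
    avg_ms = (time.perf_counter() - start) / nruns * 1000
    print(f"{device}: {avg_ms:.2f} ms per batch")
    return avg_ms
```

The `torch.cuda.synchronize()` calls matter on GPU: CUDA kernels launch asynchronously, so without synchronizing, the timer would stop before the work actually finishes.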
We will load a pre-trained neural network (ResNet50) and
use it to predict a never-before-seen picture of my cat.
You can either use my picture (img1.jpg) or choose one
from your personal gallery (highly recommended!)
Please feel free to use my code anywhere you'd like, no need to credit me!
However, if you do, I'll really appreciate it :)