Edge Bench

These are the benchmarks for the paper Characterizing the Deployment of Deep Neural Networks on Commercial Edge Devices. We are updating the repo to include EdgeTPU, TensorRT, and Movidius implementations as well.

You can find the official paper at https://ramyadhadidi.github.io/files/iiswc19-edge.pdf.

This work was done at HPArch@GaTech.

Table of Contents

  • Supported Models
  • Pre-requisites
  • How to Run

Supported Models

General Framework

| Model | PyTorch | TensorFlow | DarkNet | Caffe |
|---|---|---|---|---|
| ResNet-18 | ✔️ | ✔️ | - | - |
| ResNet-50 | ✔️ | ✔️ | ✔️ | ✔️ |
| ResNet-101 | ✔️ | ✔️ | ✔️ | ✔️ |
| Xception | ✔️ | ✔️ | - | ✔️ |
| MobileNet-v2 | ✔️ | ✔️ | - | ✔️ |
| Inception-v4 | ✔️ | ✔️ | - | ✔️ |
| AlexNet | ✔️ | ✔️ | ✔️ | ✔️ |
| VGG-11 (224x224) | ✔️ | - | - | - |
| VGG-11 (32x32) | ✔️ | - | - | - |
| VGG-16 | ✔️ | ✔️ | ✔️ | ✔️ |
| VGG-19 | ✔️ | ✔️ | - | ✔️ |
| CifarNet (32x32) | ✔️ | - | - | - |
| SSD MobileNet-v1 | ✔️ | - | - | - |
| YOLOv3 | ✔️ | - | ✔️ | - |
| Tiny YOLO | ✔️ | ✔️ | ✔️ | - |
| C3D | ✔️ | - | - | - |

Platform-specific Framework

For platform-specific frameworks, it is hard to create our own models from scratch, so we use the models that each vendor provides. We share links to the vendors' model documentation:

  • TfLite
  • TensorRT
  • Movidius
  • EdgeTPU

Pre-requisites

  • Python >= 3.5
  • CUDA 10.0
  • Python packages (the versions we use are listed below)
numpy===1.16.4

# PyTorch
torch===1.1.0
torchvision===0.2.2

# TensorFlow
tensorflow===1.13.1
Keras===2.2.4
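
A minimal install sketch for these pinned versions, assuming a plain pip-based setup (the repo does not prescribe install commands, so this is an assumption; on the edge boards the framework wheels come from the platform-specific sources described below):

# install the pinned package versions (assumed pip-based desktop/x86 setup)
pip3 install numpy==1.16.4
pip3 install torch==1.1.0 torchvision==0.2.2
pip3 install tensorflow==1.13.1 Keras==2.2.4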

PyTorch on Raspberry Pi

We follow this tutorial to compile the PyTorch library from source on Raspberry Pi.

PyTorch on Nvidia Dev Boards

We use the default JetPack library to set up both of our dev boards (the Nvidia TX2 and the Nvidia Nano). Nvidia provides its pre-built PyTorch wheel here, along with detailed instructions on how to install PyTorch on its dev boards.

TensorFlow on Raspberry Pi

We use the pre-built wheel from here for the TensorFlow library on the Raspberry Pi.

TensorFlow on Nvidia Dev Boards

As with PyTorch, Nvidia provides detailed instructions here on how to install TensorFlow.

DarkNet

We compile the DarkNet framework from source. You can refer to the website for more compilation details.

For DarkNet GPU support, we change the Makefile flags as shown below:

GPU=1
ARCH=-gencode arch=compute_62,code=[sm_62,compute_62]
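
A minimal sketch of the build, assuming the upstream pjreddie/darknet sources (the repository URL is an assumption; compute capability 6.2 matches the Jetson TX2):

# clone and build DarkNet with the GPU flags above (repo URL is an assumption)
git clone https://github.com/pjreddie/darknet.git
cd darknet
# edit the Makefile to set GPU=1 and the ARCH line shown above, then build
make -j4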

Caffe

We compile the Caffe framework from source following this tutorial. In order to compile pycaffe, we change the PYTHON_LIB and PYTHON_INCLUDE flags in the Makefile accordingly.
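
As a rough sketch, the corresponding lines in Caffe's Makefile.config might look like this for a Python 3.5 install (the exact paths are system-dependent and are assumptions):

# example Makefile.config entries for Python 3.5 (paths are assumptions; adjust to your system)
PYTHON_INCLUDE := /usr/include/python3.5m /usr/lib/python3/dist-packages/numpy/core/include
PYTHON_LIB := /usr/lib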

How to Run

PyTorch

cd pytorch
python execute.py --model [model name] --iteration [number of iterations] --cpu [use CPU if set]
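
For example, a 100-iteration ResNet-50 run on the CPU might look like this (the exact model-name string accepted by execute.py is not documented here, so resnet50 is an assumption):

python execute.py --model resnet50 --iteration 100 --cpu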

TensorFlow

cd tensorflow

# GPU
NVIDIA_VISIBLE_DEVICES=0 python execute.py --model [model name] --iteration [number of iterations]

# CPU
NVIDIA_VISIBLE_DEVICES= python execute.py --model [model name] --iteration [number of iterations]
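
For example, using the same assumed model-name string:

# ResNet-50 for 100 iterations on the GPU
NVIDIA_VISIBLE_DEVICES=0 python execute.py --model resnet50 --iteration 100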

DarkNet

We use the pre-existing model configurations in the DarkNet code base to execute models.

./darknet classifier predict [base label data] [model config] [model weights] [inference data]

You can look up more details here.
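
As an illustration, a ResNet-50 classification run might look like this (the data, config, weights, and image file names follow the upstream DarkNet distribution and are assumptions here):

./darknet classifier predict cfg/imagenet1k.data cfg/resnet50.cfg resnet50.weights data/dog.jpg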

Caffe

The models in the Caffe framework are defined as prototxt files.

python execute.py --model [model name] --iteration [number of iterations] --cpu [use CPU if set]
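
For example (again, the model-name string accepted by the script is an assumption):

# add --cpu to force CPU execution
python execute.py --model resnet50 --iteration 100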
