
MobileNet-SSD-TensorRT

Accelerate MobileNet-SSD with TensorRT.

TensorRT-MobileNet-SSD runs at about 50 FPS on the Jetson TX2.


Requirements:

1. TensorRT 4
2. cuDNN 7
3. OpenCV


Run:

cmake .
make
./build/bin/mobileNet

Reference:

https://github.com/saikumarGadde/tensorrt-ssd-easy

https://github.com/chuanqi305/MobileNet-SSD

I replaced the depthwise convolution with grouped convolution (group_conv), because grouped convolution has been optimized in cuDNN 7.
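The mapping behind this swap is that a depthwise convolution is just a grouped convolution whose group count equals the number of input channels. Below is a minimal sketch of configuring such a descriptor with cuDNN 7's cudnnSetConvolutionGroupCount; the channel count and the 3x3/stride-1 settings are illustrative assumptions, not values taken from this repository's code.

#include <cudnn.h>
#include <cstdio>

int main() {
    const int channels = 32;  // assumed channel count, for illustration only

    cudnnHandle_t handle;
    cudnnCreate(&handle);

    cudnnConvolutionDescriptor_t convDesc;
    cudnnCreateConvolutionDescriptor(&convDesc);

    // Typical MobileNet depthwise settings: 3x3 kernel, stride 1, pad 1.
    cudnnSetConvolution2dDescriptor(convDesc,
                                    /*pad_h=*/1, /*pad_w=*/1,
                                    /*stride_h=*/1, /*stride_w=*/1,
                                    /*dilation_h=*/1, /*dilation_w=*/1,
                                    CUDNN_CROSS_CORRELATION, CUDNN_DATA_FLOAT);

    // Depthwise convolution == grouped convolution with one group per
    // input channel (cuDNN 7 and later).
    cudnnSetConvolutionGroupCount(convDesc, channels);

    printf("grouped convolution descriptor configured\n");

    cudnnDestroyConvolutionDescriptor(convDesc);
    cudnnDestroy(handle);
    return 0;
}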

I retrained MobileNet-SSD; my number of classes is 5.


TODO:

  • To save the serialized model (see the sketch after this list)
  • To fix the bug of getting different results with the same input
  • To reduce the bottleneck in image decoding: imread costs too much time
  • To modify the architecture to decrease the time cost
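For the first item, a minimal sketch of saving a serialized engine with the TensorRT C++ API is given below. It assumes an already-built nvinfer1::ICudaEngine* and a caller-supplied output path; it is a sketch of the usual approach, not this repository's own code.

#include <NvInfer.h>
#include <fstream>

// Write a built engine to disk so it can later be deserialized
// instead of being rebuilt from the Caffe model on every run.
void saveEngine(nvinfer1::ICudaEngine* engine, const char* path) {
    nvinfer1::IHostMemory* serialized = engine->serialize();
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()),
              serialized->size());
    serialized->destroy();
}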

If you want to decrease the time cost of imread, you could rebuild OpenCV (https://github.com/jetsonhacks/buildOpenCVTX2).

Added a producer-consumer pipeline.
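The idea is to decode images on one thread while another thread consumes them for inference, so the imread cost overlaps with the forward pass. Below is a minimal sketch of such a pipeline; the queue bound, the file names, and the doInference placeholder are assumptions for illustration, not the repository's actual implementation.

#include <opencv2/opencv.hpp>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

std::queue<cv::Mat> g_queue;
std::mutex g_mutex;
std::condition_variable g_cv;
bool g_done = false;
const size_t kMaxQueue = 8;  // assumed bound to limit memory use

void producer(const std::vector<std::string>& paths) {
    for (const auto& p : paths) {
        cv::Mat img = cv::imread(p);  // the expensive decode happens here
        std::unique_lock<std::mutex> lock(g_mutex);
        g_cv.wait(lock, [] { return g_queue.size() < kMaxQueue; });
        g_queue.push(img);
        g_cv.notify_all();
    }
    std::lock_guard<std::mutex> lock(g_mutex);
    g_done = true;
    g_cv.notify_all();
}

void consumer() {
    while (true) {
        cv::Mat img;
        {
            std::unique_lock<std::mutex> lock(g_mutex);
            g_cv.wait(lock, [] { return !g_queue.empty() || g_done; });
            if (g_queue.empty()) break;  // producer finished and queue drained
            img = g_queue.front();
            g_queue.pop();
            g_cv.notify_all();
        }
        // doInference(img);  // placeholder for the TensorRT forward pass
    }
}

int main() {
    std::vector<std::string> paths = {"test1.jpg", "test2.jpg"};  // assumed inputs
    std::thread prod(producer, paths);
    std::thread cons(consumer);
    prod.join();
    cons.join();
    return 0;
}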

The bug of getting different results with the same input has been fixed.

