yolov3-spp

This implementation currently supports dynamic input shape. If you want the non-dynamic version, please check out commit 659fd2b.
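
In TensorRT terms, dynamic input shape means the network input is declared with -1 for height and width, and the actual size is supplied at runtime. A minimal sketch of such an input declaration (the blob name "data" is illustrative, not necessarily the name used in yolov3-spp.cpp):

#include "NvInfer.h"

// Minimal sketch: in an explicit-batch network, H and W of the input are left
// as -1 so they can be chosen at runtime, within the MIN/MAX range of the
// optimization profile (see the Config section below).
nvinfer1::ITensor* addDynamicInput(nvinfer1::INetworkDefinition* network) {
    return network->addInput("data", nvinfer1::DataType::kFLOAT,
                             nvinfer1::Dims4{1, 3, -1, -1});
}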

The PyTorch implementation is the archive branch of ultralytics/yolov3. It provides two trained yolov3-spp weights, yolov3-spp.pt and yolov3-spp-ultralytics.pt (originally named ultralytics68.pt).

Config

  • Number of classes is defined in yololayer.h
  • FP16/FP32 can be selected by a macro in yolov3-spp.cpp
  • GPU id can be selected by a macro in yolov3-spp.cpp
  • NMS threshold is set in yolov3-spp.cpp
  • BBox confidence threshold is set in yolov3-spp.cpp
  • MIN and MAX input size are defined in yolov3-spp.cpp
  • Optimization width and height for the IOptimizationProfile are defined in yolov3-spp.cpp (illustrated in the sketch after this list)
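
A minimal sketch of what these settings and the dynamic-shape optimization profile can look like on the TensorRT side. All macro names and values below are illustrative placeholders, not necessarily identical to those in yolov3-spp.cpp:

#include "NvInfer.h"

// Illustrative configuration values (placeholders for the macros in yolov3-spp.cpp).
#define USE_FP16                 // comment out to build an FP32 engine
#define DEVICE 0                 // GPU id
#define NMS_THRESH 0.4f          // IoU threshold used by NMS
#define BBOX_CONF_THRESH 0.5f    // boxes below this confidence are dropped
#define MIN_INPUT_SIZE 288       // smallest input H/W the engine accepts
#define MAX_INPUT_SIZE 608       // largest input H/W the engine accepts
#define OPT_INPUT_W 416          // width TensorRT tunes kernels for
#define OPT_INPUT_H 416          // height TensorRT tunes kernels for

// Register a dynamic-shape profile so the engine accepts any input size
// between MIN_INPUT_SIZE and MAX_INPUT_SIZE.
void addDynamicShapeProfile(nvinfer1::IBuilder* builder,
                            nvinfer1::IBuilderConfig* config,
                            const char* inputBlobName) {
    using namespace nvinfer1;
    IOptimizationProfile* profile = builder->createOptimizationProfile();
    profile->setDimensions(inputBlobName, OptProfileSelector::kMIN,
                           Dims4{1, 3, MIN_INPUT_SIZE, MIN_INPUT_SIZE});
    profile->setDimensions(inputBlobName, OptProfileSelector::kOPT,
                           Dims4{1, 3, OPT_INPUT_H, OPT_INPUT_W});
    profile->setDimensions(inputBlobName, OptProfileSelector::kMAX,
                           Dims4{1, 3, MAX_INPUT_SIZE, MAX_INPUT_SIZE});
    config->addOptimizationProfile(profile);
#ifdef USE_FP16
    config->setFlag(BuilderFlag::kFP16);
#endif
}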

How to Run

  1. Generate yolov3-spp_ultralytics68.wts from the PyTorch implementation with yolov3-spp.cfg and yolov3-spp-ultralytics.pt, or download the .wts from the model zoo (the .wts layout is sketched after the commands below).
git clone https://github.com/wang-xinyu/tensorrtx.git
git clone -b archive https://github.com/ultralytics/yolov3.git
// download its weights 'yolov3-spp-ultralytics.pt'
// copy gen_wts.py from tensorrtx/yolov3-spp/ to ultralytics/yolov3/
// go to ultralytics/yolov3/
python gen_wts.py yolov3-spp-ultralytics.pt
// a file 'yolov3-spp_ultralytics68.wts' will be generated.
// the master branch of yolov3 should work, if not, you can checkout 4ac60018f6e6c1e24b496485f126a660d9c793d8
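
For reference, a minimal sketch of how such a .wts file is typically parsed on the C++ side, assuming the usual tensorrtx layout (a tensor count on the first line, then one line per tensor with its name, element count, and hex-encoded float values):

#include <cstdint>
#include <fstream>
#include <map>
#include <string>
#include "NvInfer.h"

// Parse a tensorrtx-style .wts file into TensorRT Weights structs.
// Assumed layout: first line = number of tensors; each following line =
// "<name> <element count> <hex float> <hex float> ...".
std::map<std::string, nvinfer1::Weights> loadWts(const std::string& path) {
    std::map<std::string, nvinfer1::Weights> weightMap;
    std::ifstream input(path);
    int32_t count;
    input >> count;                                  // number of weight tensors
    while (count-- > 0) {
        std::string name;
        uint32_t size;
        input >> name >> std::dec >> size;
        uint32_t* values = new uint32_t[size];       // raw float bits, kept alive until the engine is built
        for (uint32_t i = 0; i < size; ++i) input >> std::hex >> values[i];
        weightMap[name] = nvinfer1::Weights{nvinfer1::DataType::kFLOAT, values, size};
    }
    return weightMap;
}
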
  2. Build tensorrtx/yolov3-spp and run (the -s and -d modes are sketched after this list).
// put yolov3-spp_ultralytics68.wts into tensorrtx/yolov3-spp/
// go to tensorrtx/yolov3-spp/
mkdir build
cd build
cmake ..
make
sudo ./yolov3-spp -s             // serialize model to plan file i.e. 'yolov3-spp.engine'
sudo ./yolov3-spp -d  ../samples // deserialize plan file and run inference; the images in ../samples will be processed.
  3. Check the generated images, _zidane.jpg and _bus.jpg.
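
The -s and -d modes above correspond roughly to the following TensorRT calls (a minimal sketch; the logger, network definition, and inference loop are omitted):

#include <fstream>
#include <iterator>
#include <string>
#include "NvInfer.h"

// -s: write a built engine to a plan file such as 'yolov3-spp.engine'.
void serializeEngine(nvinfer1::ICudaEngine* engine, const std::string& planPath) {
    nvinfer1::IHostMemory* blob = engine->serialize();
    std::ofstream out(planPath, std::ios::binary);
    out.write(reinterpret_cast<const char*>(blob->data()), blob->size());
    blob->destroy();
}

// -d: read the plan file back and rebuild the engine for inference.
nvinfer1::ICudaEngine* deserializeEngine(nvinfer1::IRuntime* runtime, const std::string& planPath) {
    std::ifstream in(planPath, std::ios::binary);
    std::string blob((std::istreambuf_iterator<char>(in)), std::istreambuf_iterator<char>());
    return runtime->deserializeCudaEngine(blob.data(), blob.size());
}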

More Information

See the README on the repository home page.