This is a work in progress, but it works for me. It is rough around the edges and there is plenty of room for improvement - feel free to contribute:
Inference time with yolov2-tiny is 85 ms/image (including resizing, NMS, and decoding results) on a Jetson Nano in 10W mode; it should be about twice as fast in MAXN mode.
- sudo apt-get install libgflags-dev cmake
- git clone https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps
- export YOLO_ROOT=`pwd`/deepstream_reference_apps/yolo
- cd $YOLO_ROOT/apps/trt-yolo
- edit CMakeLists.txt as follows (the resulting lines are shown in the snippet below):
- change 'set(CMAKE_CXX_FLAGS_RELEASE "-O2")' to 'set(CMAKE_CXX_FLAGS_RELEASE "-O2 -fPIC")'
- add the following line after it: 'set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} --compiler-options -fPIC")'
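  After these two edits, the relevant part of CMakeLists.txt should read:

  ```cmake
  set(CMAKE_CXX_FLAGS_RELEASE "-O2 -fPIC")
  set(CUDA_NVCC_FLAGS "${CUDA_NVCC_FLAGS} --compiler-options -fPIC")
  ```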
- mkdir build && cd build
- cmake -D CMAKE_BUILD_TYPE=Release ..
- make && sudo make install
- edit $YOLO_ROOT/config/yolov2-tiny.txt and change all the paths to absolute paths (config_file_path, wts_file_path, labels_file_path, test_images); if the file is read only, sudo edit it or chmod +w it first
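  This is a gflags flagfile, so the path flags should end up looking something like the following (the directories and file names below are illustrative - use the absolute paths of wherever you cloned the repo and put the cfg/weights/labels files):

  ```
  --config_file_path=/home/nano/deepstream_reference_apps/yolo/config/yolov2-tiny.cfg
  --wts_file_path=/home/nano/deepstream_reference_apps/yolo/data/yolov2-tiny.weights
  --labels_file_path=/home/nano/deepstream_reference_apps/yolo/data/labels.txt
  --test_images=/home/nano/deepstream_reference_apps/yolo/data/test_images.txt
  ```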
- check that the C++ app works by doing:
- cd $YOLO_ROOT/apps/trt-yolo/build
- list the paths of an image or two in $YOLO_ROOT/data/test_images.txt (see the example after these steps)
- ./trt-yolo-app --flagfile=$YOLO_ROOT/config/yolov2-tiny.txt
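  test_images.txt is just a plain list of image paths, one per line, e.g. (hypothetical paths):

  ```
  /home/nano/pictures/dog.jpg
  /home/nano/pictures/street.jpg
  ```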
- cd $YOLO_ROOT/apps/trt-yolo/build/lib
- git clone https://github.com/mosheliv/tensortrt-yolo-python-api
- cd tensortrt-yolo-python-api
- source link_shared.sh
- python t.py --flagfile=$YOLO_ROOT/config/yolov2-tiny.txt your_image.jpg
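If you want to drive the detector from your own script without digging into the wrapper's internals, a minimal sketch is to shell out to t.py the same way as the command above. The FLAGFILE path and the batch_detect.py name are illustrative, and it assumes t.py prints its detections to stdout and that you run it from the tensortrt-yolo-python-api directory after sourcing link_shared.sh:

```python
import glob
import subprocess
import sys

# Illustrative path - point this at the flagfile you edited above.
FLAGFILE = "/home/nano/deepstream_reference_apps/yolo/config/yolov2-tiny.txt"

def detect(image_path):
    # Invoke t.py exactly like the command above and return whatever it
    # prints for this image.
    return subprocess.check_output(
        ["python", "t.py", "--flagfile=" + FLAGFILE, image_path])

if __name__ == "__main__":
    # Usage: python batch_detect.py /path/to/images
    for img in sorted(glob.glob(sys.argv[1] + "/*.jpg")):
        print(detect(img))
```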