A YOLOv3 framework based on TensorFlow that supports multiple models, multiple datasets, any number of output layers, any number of anchors, model pruning, and porting models to the K210!


[toc]

K210 YOLO V3 framework

This is a clear, extensible YOLOv3 framework.

  • Real-time display of recall and precision during training
  • Easy to use with other datasets
  • Supports multiple model backbones, with room to add more
  • Supports any number of output layers and any number of anchors
  • Supports model weight pruning
  • Models are portable to the Kendryte K210 chip

Training on VOC

Set up the environment

Python 3.7.1; other dependencies are listed in requirements.txt (install them with pip3 install -r requirements.txt).
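If you want to sanity-check the environment before training, a small snippet like the one below can help; it assumes a TensorFlow 1.x era setup, which is what this repo was built against.

    import sys
    import tensorflow as tf

    # Quick environment check: confirm the interpreter and TensorFlow versions
    # before training (this repo was written against a TensorFlow 1.x release).
    print('python     :', sys.version.split()[0])
    print('tensorflow :', tf.__version__)
    print('GPU found  :', tf.test.is_gpu_available())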

Prepare dataset

First, download and prepare the VOC data with the standard YOLO scripts:

wget https://pjreddie.com/media/files/VOCtrainval_11-May-2012.tar
wget https://pjreddie.com/media/files/VOCtrainval_06-Nov-2007.tar
wget https://pjreddie.com/media/files/VOCtest_06-Nov-2007.tar
tar xf VOCtrainval_11-May-2012.tar
tar xf VOCtrainval_06-Nov-2007.tar
tar xf VOCtest_06-Nov-2007.tar
wget https://pjreddie.com/media/files/voc_label.py
python3 voc_label.py
cat 2007_train.txt 2007_val.txt 2012_*.txt > train.txt

Now you have train.txt. Next, merge the image paths and annotations into one npy file:

python3 make_voc_list.py ~/dataset/train.txt data/voc_img_ann.npy
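To sanity-check the generated file you can open it with numpy; this is just a peek, assuming make_voc_list.py stores one record per image as an object array (see that script for the exact layout).

    import numpy as np

    # The annotation file is an object array, so allow_pickle is required.
    img_ann = np.load('data/voc_img_ann.npy', allow_pickle=True)
    print('number of images:', len(img_ann))
    print('first record    :', img_ann[0])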

Make anchors

Load the annotations and generate anchors:

make anchors DATASET=voc ANCNUM=3

On success, a figure of the clustered anchors is displayed.

NOTE: The k-means result is random; if you get an error, just rerun the command.
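The anchors come from k-means clustering over the box sizes, and k-means starts from a random initialization, which is why two runs give different anchors. Below is a standalone illustration of the idea with scikit-learn on dummy width/height pairs; it is not the repo's own implementation in make_anchor_list.py.

    import numpy as np
    from sklearn.cluster import KMeans

    # Dummy normalised box sizes (width, height); the real script clusters the
    # boxes stored in data/voc_img_ann.npy.
    boxes_wh = np.random.rand(500, 2)

    # e.g. 3 anchors per output layer, 2 output layers -> 6 clusters.
    kmeans = KMeans(n_clusters=6).fit(boxes_wh)
    anchors = kmeans.cluster_centers_

    # Sort anchors by area, smallest to largest.
    print(anchors[np.argsort(anchors.prod(axis=1))])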

If you want to use a custom dataset, write a script that generates data/{dataset_name}_img_ann.npy (a sketch follows below), then run make anchors DATASET=dataset_name. For more options, see python3 ./make_anchor_list.py -h
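A minimal sketch of such a script, with a hypothetical record layout (an image path paired with its boxes); mirror whatever make_voc_list.py actually writes, since the exact field order matters to the training code.

    import numpy as np

    # Hypothetical layout: one record per image, pairing the absolute image
    # path with an (n_boxes, 5) array -- check make_voc_list.py for the real one.
    records = [
        ['/home/me/mydataset/img_0001.jpg', np.array([[0, 0.5, 0.5, 0.2, 0.3]])],
        ['/home/me/mydataset/img_0002.jpg', np.array([[1, 0.3, 0.4, 0.1, 0.2]])],
    ]

    np.save('data/mydataset_img_ann.npy', np.array(records, dtype=object))
    # Then: make anchors DATASET=mydataset ANCNUM=3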

If you want to change the number of output layers, modify OUTSIZE in the Makefile.

Download pre-trained models

You must download the weights for the model you want to train, because the pre-trained weights are loaded by default.

Put the files into the K210_Yolo_framework/data directory.

MODEL          DEPTHMUL  Links
yolo_mobilev1  0.5       google drive | weiyun
yolo_mobilev1  0.75      google drive | weiyun
yolo_mobilev1  1.0       google drive | weiyun
yolo_mobilev2  0.5       google drive | weiyun
yolo_mobilev2  0.75      google drive | weiyun
yolo_mobilev2  1.0       google drive | weiyun
tiny_yolo      -         google drive | weiyun
yolo           -         google drive | weiyun

NOTE: The MobileNet backbone is not the original one; it has been modified to fit the K210.

Train

When you use a MobileNet backbone, you need to specify the DEPTHMUL parameter. You don't need to set DEPTHMUL for tiny yolo or yolo.

  1. Set MODEL and DEPTHMUL to start training:

    make train MODEL=xxxx DEPTHMUL=xx MAXEP=10 ILR=0.001 DATASET=voc CLSNUM=20 IAA=False BATCH=16

    You can use Ctrl+C to stop training; the weights and model are automatically saved in the log directory.

  2. Set CKPT to continue training:

    make train MODEL=xxxx DEPTHMUL=xx MAXEP=10 ILR=0.0005 DATASET=voc CLSNUM=20 IAA=False BATCH=16 CKPT=log/xxxxxxxxx/yolo_model.h5
  3. Set IAA to enable data augmentation:

    make train MODEL=xxxx DEPTHMUL=xx MAXEP=10 ILR=0.0001 DATASET=voc CLSNUM=20 IAA=True BATCH=16 CKPT=log/xxxxxxxxx/yolo_model.h5
  4. Use TensorBoard:

    tensorboard --logdir log

NOTE: For more options, see python3 ./keras_train.py -h
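Outside the Makefile, a saved checkpoint is a regular Keras HDF5 file. Here is a minimal sketch for inspecting it, loading with compile=False so the custom YOLO loss does not need to be reconstructed (the log path is a placeholder; if the checkpoint contains custom layers you may also need custom_objects).

    import tensorflow as tf

    # compile=False skips restoring the training loss/optimizer, which is
    # enough for inspecting layers or exporting weights.
    model = tf.keras.models.load_model('log/xxxxxx/yolo_model.h5', compile=False)
    model.summary()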

Inference

make inference MODEL=xxxx DEPTHMUL=xx CKPT=log/xxxxxx/yolo_model.h5 IMG=data/people.jpg

You can try it with my model:

make inference MODEL=yolo_mobilev1 DEPTHMUL=0.75 CKPT=asset/yolo_model.h5 IMG=data/people.jpg

make inference MODEL=yolo_mobilev1 DEPTHMUL=0.75 CKPT=asset/yolo_model.h5 IMG=data/dog.jpg

NOTE: Since the anchors are randomly generated, your results will differ from the images above. Just load this model and continue training for a while.

For more options, see python3 ./keras_inference.py -h

Prune Model

make train MODEL=xxxx MAXEP=1 ILR=0.0003 DATASET=voc CLSNUM=20 BATCH=16 PRUNE=True CKPT=log/xxxxxx/yolo_model.h5 END_EPOCH=1

When training finishes, the pruned model is saved as log/xxxxxx/yolo_prune_model.h5.
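If the saved pruned model still carries pruning wrappers (this sketch assumes the PRUNE option is based on the TensorFlow Model Optimization toolkit; check keras_train.py to confirm), they can be stripped before freezing, roughly like this:

    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    # Assumes tfmot-style pruning wrappers. prune_scope() registers the wrapper
    # classes so load_model can deserialize them; strip_pruning then returns a
    # plain Keras model with the sparse weights baked in, ready for freezing.
    with tfmot.sparsity.keras.prune_scope():
        pruned = tf.keras.models.load_model('log/xxxxxx/yolo_prune_model.h5', compile=False)
    model = tfmot.sparsity.keras.strip_pruning(pruned)
    model.save('log/xxxxxx/yolo_prune_model_stripped.h5')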

Freeze

toco --output_file mobile_yolo.tflite --keras_model_file log/xxxxxx/yolo_model.h5

Now you have mobile_yolo.tflite.
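If the toco CLI is not available, the same conversion can be done from Python with the TF 1.x converter API (in TF 2.x the converter entry points differ):

    import tensorflow as tf

    # Python equivalent of the toco command above (TF 1.x API).
    converter = tf.lite.TFLiteConverter.from_keras_model_file('log/xxxxxx/yolo_model.h5')
    tflite_model = converter.convert()
    with open('mobile_yolo.tflite', 'wb') as f:
        f.write(tflite_model)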

Convert to kmodel

Please refer to nncase.

Demo

Use kflash.py to download yolo3_frame_test_public/kfpkg/kpu_yolov3.kfpkg to the KD233 board.

NOTE: I just use the Kendryte YOLOv2 demo code to prove the validity of the model. If you need standard YOLOv3 region layer code, you can contact me.

Caution

  1. Default parameters are defined in the Makefile
  2. OBJWEIGHT, NOOBJWEIGHT, and WHWEIGHT are used to balance precision and recall
  3. The default is two output layers; if you want more output layers, modify OUTSIZE
  4. If you want to use the full yolo, modify IMGSIZE and OUTSIZE in the Makefile to the original yolo parameters