Part 1. Introduction [Code Walkthrough]
An implementation of the YOLO v3 object detector in TensorFlow. The full details are in this paper. In this project we cover the following parts:
- YOLO v3 architecture
- Training tensorflow-yolov3 with GIOU loss function
- Basic working demo
- Training pipeline
- Multi-scale training method
- Compute VOC mAP
The YOLO paper is quite hard to understand on its own; read alongside that paper, this repo gives you a quick understanding of the YOLO algorithm.
- Clone this repository
$ git clone https://github.com/YunYang1994/tensorflow-yolov3.git
- You should install some dependencies before getting your hands on the code.
$ cd tensorflow-yolov3
$ pip install -r ./docs/requirements.txt
- Export the loaded COCO weights as a TF checkpoint (yolov3_coco.ckpt)
$ cd checkpoint
$ wget https://github.com/YunYang1994/tensorflow-yolov3/releases/download/v1.0/yolov3_coco.tar.gz
$ tar -xvf yolov3_coco.tar.gz
$ cd ..
$ python convert_weight.py
$ python freeze_graph.py
- Then you will get some .pb files in the root path. Run the demo scripts (a sketch of loading the frozen graph directly follows them):
$ python image_demo.py
$ python video_demo.py # if use camera, set video_path = 0
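If you want to run the frozen graph yourself instead of going through the demo scripts, here is a minimal TF 1.x sketch. The tensor names, the .pb filename, and the test image path are assumptions (chosen to mirror what the demo scripts appear to use) and should be verified against the actual graph; the preprocessing is a naive resize rather than the repo's letterboxing.

import cv2
import numpy as np
import tensorflow as tf

pb_file = "./yolov3_coco.pb"                      # assumed output of freeze_graph.py
return_elements = ["input/input_data:0",          # assumed tensor names; check the .pb
                   "pred_sbbox/concat_2:0",
                   "pred_mbbox/concat_2:0",
                   "pred_lbbox/concat_2:0"]

graph = tf.Graph()
with tf.gfile.GFile(pb_file, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
with graph.as_default():
    tensors = tf.import_graph_def(graph_def, return_elements=return_elements)

image = cv2.imread("test.jpg")                               # any test image
image = cv2.resize(image, (416, 416))[..., ::-1] / 255.0     # BGR -> RGB, scale to [0, 1]
image = image[np.newaxis, ...].astype(np.float32)

with tf.Session(graph=graph) as sess:
    sbbox, mbbox, lbbox = sess.run(tensors[1:], feed_dict={tensors[0]: image})
    print(sbbox.shape, mbbox.shape, lbbox.shape)   # raw predictions at the three scales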
To train on your own dataset, two files are required: a dataset annotation file and a class names file (one class name per line), in the formats shown below. A small parsing sketch follows the examples.
xxx/xxx.jpg 18.19,6.32,424.13,421.83,20 323.86,2.65,640.0,421.94,20
xxx/xxx.jpg 48,240,195,371,11 8,12,352,498,14
# image_path x_min, y_min, x_max, y_max, class_id x_min, y_min ,..., class_id
person
bicycle
car
...
toothbrush
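For reference, a minimal sketch of how these two files could be parsed is shown below; the helper names are illustrative only, not the repo's actual loaders.

def parse_annotation_line(line):
    # "image_path x_min,y_min,x_max,y_max,class_id ..." -> (path, list of boxes)
    fields = line.strip().split()
    image_path = fields[0]
    boxes = []
    for box in fields[1:]:
        x_min, y_min, x_max, y_max, class_id = map(float, box.split(","))
        boxes.append([x_min, y_min, x_max, y_max, int(class_id)])
    return image_path, boxes

def load_class_names(names_file):
    # one class name per line, e.g. the voc.names file referenced in the config below
    with open(names_file) as f:
        return [name.strip() for name in f if name.strip()]

path, boxes = parse_annotation_line("xxx/xxx.jpg 48,240,195,371,11 8,12,352,498,14")
print(path, boxes)   # xxx/xxx.jpg [[48.0, 240.0, 195.0, 371.0, 11], [8.0, 12.0, 352.0, 498.0, 14]]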
To help you understand my training process, here is a demo of training on the PASCAL VOC dataset.
Download the PASCAL VOC trainval and test data:
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
Extract all of these tars into one directory and rename the folders so that it has the following basic structure.
VOC # path: /home/yang/test/VOC/
├── test
| └──VOCdevkit
| └──VOC2007 (from VOCtest_06-Nov-2007.tar)
└── train
└──VOCdevkit
├──VOC2007 (from VOCtrainval_06-Nov-2007.tar)
└──VOC2012 (from VOCtrainval_11-May-2012.tar)
$ python scripts/voc_annotation.py --data_path /home/yang/test/VOC
Then edit ./core/config.py to make some necessary configurations:
__C.YOLO.CLASSES = "./data/classes/voc.names"
__C.TRAIN.ANNOT_PATH = "./data/dataset/voc_train.txt"
__C.TEST.ANNOT_PATH = "./data/dataset/voc_test.txt"
Here are two training methods. (1) Train from scratch:
$ python train.py
$ tensorboard --logdir ./data
(2) Train from COCO weights:
$ cd checkpoint
$ wget https://github.com/YunYang1994/tensorflow-yolov3/releases/download/v1.0/yolov3_coco.tar.gz
$ tar -xvf yolov3_coco.tar.gz
$ cd ..
$ python convert_weight.py --train_from_coco
$ python train.py
To evaluate, edit ./core/config.py and set the weight file path to the checkpoint you want to test, i.e. the one generated in the previous step:
__C.TEST.WEIGHT_FILE = "./checkpoint/yolov3_test_loss=9.2099.ckpt-5"
$ python evaluate.py
$ cd mAP
$ python main.py -na
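For intuition, the per-class VOC AP reported here is the area under the precision-recall curve. Below is a minimal sketch of the standard VOC-style all-point interpolation, not necessarily the exact code in mAP/main.py.

import numpy as np

def voc_ap(recall, precision):
    # recall/precision arrays are ordered by descending detection confidence
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):        # make precision monotonically decreasing
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]      # points where recall changes
    return np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1])

print(voc_ap(np.array([0.1, 0.4, 0.8]), np.array([1.0, 0.9, 0.6])))   # ~0.61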
If you are still unfamiliar with the training pipeline, you can join here to discuss it with us.
Download COCO trainval and test data
$ wget http://images.cocodataset.org/zips/train2017.zip
$ wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
$ wget http://images.cocodataset.org/zips/test2017.zip
$ wget http://images.cocodataset.org/annotations/image_info_test2017.zip
YOLO stands for You Only Look Once. It's an object detector that uses features learned by a deep convolutional neural network to detect objects. Although we have successfully run the code, we still need to understand how YOLO works.
The paper suggests using clustering on bounding box shapes to find anchor box priors that suit the data. For more details, see here.
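As a minimal sketch of that idea, the following runs k-means on ground-truth (width, height) pairs with a 1 - IoU distance; it illustrates the general technique from the paper, not this repo's exact script.

import numpy as np

def iou_wh(boxes, anchors):
    # IoU between (width, height) pairs, i.e. boxes aligned at a common corner
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + anchors[None, :, 0] * anchors[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes_wh, k=9, iters=100, seed=0):
    rng = np.random.RandomState(seed)
    anchors = boxes_wh[rng.choice(len(boxes_wh), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes_wh, anchors), axis=1)   # nearest anchor under 1 - IoU
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes_wh[assign == j].mean(axis=0)
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]   # sort by area

boxes_wh = np.random.rand(1000, 2) * 400 + 10    # dummy (width, height) data for the sketch
print(kmeans_anchors(boxes_wh, k=9))             # nine anchor priors, three per detection scale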
In this project, I use the pretrained weights, which cover 80 trained YOLO classes (the COCO dataset), for recognition. The class label is represented as c, an integer from 1 to 80, where each number corresponds to one class; if c=3, the detected object is a car. The image features learned by the deep convolutional layers are passed to a classifier and regressor which makes the detection prediction (coordinates of the bounding boxes, the class label, etc.); the picture below also shows the details. (Thanks Levio for your great image!)
- input: [None, 416, 416, 3]
- output: the confidence of an object being present in each rectangle, plus the positions, sizes, and classes of the detected objects. Each bounding box is represented by (pc, bx, by, bw, bh, c) as explained above. In this case n=80, which means c is an 80-dimensional vector, so the final size of the vector representing a bounding box is 85. The first number pc is the confidence that an object is present; the next four numbers bx, by, bw, bh describe the bounding box; each of the last 80 numbers is the predicted probability of the class with the corresponding index. (A decoding sketch follows.)
The output may contain several rectangles that are false positives or that overlap. With an input image of size [416, 416, 3], you get (52x52 + 26x26 + 13x13) x 3 = 10647 boxes, since YOLO v3 uses 9 anchor boxes in total (three for each scale). So we need a way to reduce them. The first attempt to reduce these rectangles is to filter them by a score threshold.
Input arguments:
- boxes: tensor of shape [10647, 4]
- scores: tensor of shape [10647, 80] containing the detection scores for the 80 classes
- score_thresh: float value used to get rid of boxes with low scores
# Step 1: Create a filtering mask based on "box_class_scores" by using "threshold".
score_thresh=0.4
mask = tf.greater_equal(scores, tf.constant(score_thresh))
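To actually drop the low-scoring boxes, one common variant reduces the scores to the best class per box and applies the mask with tf.boolean_mask. This is a TF 1.x style sketch under that assumption; the repo may instead keep the per-class mask computed above.

import tensorflow as tf

boxes = tf.random_uniform([10647, 4])                     # stand-ins for the real tensors
scores = tf.random_uniform([10647, 80])
score_thresh = 0.4

box_class_scores = tf.reduce_max(scores, axis=-1)         # best class score of each box
box_classes = tf.argmax(scores, axis=-1)                  # index of that class
mask = tf.greater_equal(box_class_scores, score_thresh)   # keep only confident boxes

filtered_boxes = tf.boolean_mask(boxes, mask)
filtered_scores = tf.boolean_mask(box_class_scores, mask)
filtered_classes = tf.boolean_mask(box_classes, mask)

with tf.Session() as sess:
    b, s, c = sess.run([filtered_boxes, filtered_scores, filtered_classes])
    print(b.shape, s.shape, c.shape)                      # the surviving boxes and their classes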
Even after filtering by score threshold, we still have a lot of overlapping boxes. The second filtering approach is the Non-Maximum Suppression (NMS) algorithm (a plain-Python sketch of its steps follows the list):
- Discard all boxes with pc <= 0.4
- While there are any remaining boxes:
  - Pick the box with the largest pc and output it as a prediction
  - Discard any remaining box with IoU >= 0.5 with the box output in the previous step
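Here is a plain-Python/NumPy sketch of those steps (illustrative only, not the repo's implementation):

import numpy as np

def iou(box, boxes):
    # IoU between one box and an array of boxes, all as (x_min, y_min, x_max, y_max)
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def nms(boxes, scores, score_thresh=0.4, iou_thresh=0.5):
    keep = []
    order = [i for i in np.argsort(-scores) if scores[i] > score_thresh]  # discard low pc
    while order:
        best = order.pop(0)     # pick the box with the largest pc ...
        keep.append(best)       # ... and output it as a prediction
        # discard remaining boxes with IoU >= iou_thresh with that box
        order = [i for i in order if iou(boxes[best], boxes[i:i + 1])[0] < iou_thresh]
    return keep

boxes = np.array([[10, 10, 100, 100], [12, 12, 98, 96],
                  [200, 200, 300, 300], [11, 9, 99, 101]], dtype=float)
scores = np.array([0.9, 0.8, 0.7, 0.85])
print(nms(boxes, scores))       # [0, 2]: the three overlapping boxes collapse into one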
In TensorFlow, we can simply implement the non-maximum suppression algorithm per class like this (for more details, see here):
for i in range(num_classes):
    # non_max_suppression returns the indices of the boxes to keep;
    # max_output_size is a required argument (20 here is just an example cap)
    keep_idx = tf.image.non_max_suppression(boxes, scores[:, i], max_output_size=20, iou_threshold=0.5)
Non-max suppression uses a very important function called "Intersection over Union", or IoU. Here is an example of the non-maximum suppression algorithm: the algorithm receives 4 overlapping bounding boxes as input, and the output returns only one.
If you want more details, read the source code and the original paper, or contact me!
- YOLOv3 object detection now has a TensorFlow implementation and can be trained on your own data
- Implementing YOLO v3 in Tensorflow (TF-Slim)