(Please install tensorflow-gpu 1.11.0! TensorFlow is otherwise a real pain to deal with!)
Implementation of YOLO v3 object detector in Tensorflow. The full details are in this paper. In this project we cover several segments as follows:
The YOLO paper is quite hard to understand on its own; read alongside that paper, this repo enables you to gain a quick understanding of the YOLO algorithm.
$ git clone https://github.com/YunYang1994/tensorflow-yolov3.git
$ cd tensorflow-yolov3
$ pip install -r ./docs/requirements.txt
Export the loaded COCO weights as a TensorFlow checkpoint file (yolov3_coco.ckpt):
$ cd checkpoint
$ wget https://github.com/YunYang1994/tensorflow-yolov3/releases/download/v1.0/yolov3_coco.tar.gz
$ tar -xvf yolov3_coco.tar.gz
$ cd ..
$ python convert_weight.py
$ python freeze_graph.py
(Roughly speaking, the generated .pb file is what the final detection step actually needs. To generate the .pb file correctly you need the right .names file and the right number of classes, and you also have to modify the __C.YOLO.CLASSES and __C.YOLO.ORIGINAL_WEIGHT parameters in config.py; the steps must be followed strictly, and later on this all requires consulting the TensorFlow tutorials. I am not sure whether the __C.YOLO.DEMO_WEIGHT parameter needs to be changed; running the commands above may update it automatically.)
Then you will get some .pb files in the root path; run the demo scripts:
$ python image_demo.py
$ python video_demo.py
# If using a camera, set video_path = 0 in video_demo.py; this will use the machine's built-in camera by default.
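If you would rather use the frozen graph directly instead of the demo scripts, here is a minimal sketch of loading a .pb file in TensorFlow 1.x. The file name and the tensor names below are assumptions for illustration; check what freeze_graph.py actually exported in your run.

import numpy as np
import tensorflow as tf

PB_FILE = "./yolov3_coco.pb"          # assumed name of the frozen graph from freeze_graph.py
INPUT_TENSOR = "input/input_data:0"   # assumed input tensor name; verify against your graph
OUTPUT_TENSOR = "pred_bbox:0"         # assumed output tensor name; verify against your graph

# Read the serialized GraphDef and import it into a fresh graph.
graph = tf.Graph()
with tf.gfile.GFile(PB_FILE, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

# Run a dummy 416x416 RGB image through the network to check the wiring.
with tf.Session(graph=graph) as sess:
    image = np.zeros((1, 416, 416, 3), dtype=np.float32)
    pred = sess.run(graph.get_tensor_by_name(OUTPUT_TENSOR),
                    feed_dict={graph.get_tensor_by_name(INPUT_TENSOR): image})
    print(pred.shape)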
Two files are required, as follows:
dataset.txt:
xxx/xxx.jpg 18.19,6.32,424.13,421.83,20 323.86,2.65,640.0,421.94,20
xxx/xxx.jpg 48,240,195,371,11 8,12,352,498,14
# image_path x_min, y_min, x_max, y_max, class_id x_min, y_min ,..., class_id
class.names:
person
bicycle
car
...
toothbrush
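As an illustration only (not code from this repo), here is a minimal sketch of parsing one line of dataset.txt into an image path and a box array:

import numpy as np

def parse_annotation_line(line):
    # Parse "image_path x_min,y_min,x_max,y_max,class_id ..." into (path, boxes).
    parts = line.strip().split()
    image_path = parts[0]
    # Each remaining token is one box: x_min, y_min, x_max, y_max, class_id
    boxes = np.array([list(map(float, box.split(","))) for box in parts[1:]])
    return image_path, boxes

path, boxes = parse_annotation_line("xxx/xxx.jpg 48,240,195,371,11 8,12,352,498,14")
print(path)         # xxx/xxx.jpg
print(boxes.shape)  # (2, 5)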
To help you understand my training process, I made this demo of training on the PASCAL VOC (Visual Object Classes) dataset.
Download the PASCAL VOC trainval and test data:
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
Extract all of these tars into one directory and rename them so that the directory has the following basic structure:
VOC # path: /home/yang/test/VOC/
├── test
| └──VOCdevkit
| └──VOC2007 (from VOCtest_06-Nov-2007.tar)
└── train
└──VOCdevkit
├──VOC2007 (from VOCtrainval_06-Nov-2007.tar)
└──VOC2012 (from VOCtrainval_11-May-2012.tar)
$ python scripts/voc_annotation.py --data_path /home/yang/test/VOC
Then edit your ./core/config.py to make some necessary configurations:
__C.YOLO.CLASSES = "./data/classes/voc.names"
__C.TRAIN.ANNOT_PATH = "./data/dataset/voc_train.txt"
__C.TEST.ANNOT_PATH = "./data/dataset/voc_test.txt"
Here are two kinds of training methods:

(1) Train from scratch, without a pretrained model:
$ python train.py
$ tensorboard --logdir ./data
(2) Train with the COCO weights as a pretrained model (recommended):
$ cd checkpoint
$ wget https://github.com/YunYang1994/tensorflow-yolov3/releases/download/v1.0/yolov3_coco.tar.gz
$ tar -xvf yolov3_coco.tar.gz
$ cd ..
$ python convert_weight.py --train_from_coco
$ python train.py
After you run the training command it just keeps going for a very long time. On my 1080Ti GPU it ran for several days and stopped at around epoch 45 (as far as I remember). The pile of weight files it produced in the checkpoint folder added up to tens of GB. The weight file names are tagged with the loss value, so I kept the files with the smaller losses and deleted the rest (as shown in the figure, I only kept yolov3_test_loss=8.4732.ckpt-5 and yolov3_test_loss=7.8837.ckpt-12). The checkpoint file is updated automatically during training; I am not entirely sure what it is used for. (Note: even with the same number of training epochs, the loss values of the generated files can differ from run to run.) The weight files produced by training are used in the detection steps that follow.
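If you prefer to pick the lowest-loss checkpoint programmatically rather than by eye, here is a small sketch (not part of this repo) that parses the loss value out of the checkpoint file names; the glob pattern assumes the naming scheme shown above.

import glob
import re

# Checkpoints come as .index/.data/.meta triples; glob the .index files and strip the suffix.
ckpts = glob.glob("./checkpoint/yolov3_test_loss=*.ckpt-*.index")

def loss_of(path):
    # Extract the loss encoded in names like "yolov3_test_loss=8.4732.ckpt-5.index".
    return float(re.search(r"loss=(\d+\.\d+)", path).group(1))

best = min(ckpts, key=loss_of)[:-len(".index")]
print(best)  # e.g. ./checkpoint/yolov3_test_loss=7.8837.ckpt-12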
Edit your ./core/config.py to make some necessary configurations; set the weight file path to whichever of the weight files generated in the previous step you want to test:
__C.TEST.WEIGHT_FILE = "./checkpoint/yolov3_test_loss=8.4732.ckpt-5"
$ python evaluate.py
$ cd mAP
$ python main.py -na
The result:
If you are still unfamiliar with the training pipeline, you can join here to discuss it with us.
Download the COCO trainval and test data:
$ wget http://images.cocodataset.org/zips/train2017.zip
$ wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
$ wget http://images.cocodataset.org/zips/test2017.zip
$ wget http://images.cocodataset.org/annotations/image_info_test2017.zip
YOLO stands for You Only Look Once (which is to say, it detects objects very fast). It is an object detector that uses features learned by a deep convolutional neural network to detect objects. Although we have successfully run this code, we still need to understand how YOLO works.
The paper suggests using clustering on bounding box shapes to find anchor box priors that are well suited to the data; for more details, see here.
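Here is a minimal sketch of that clustering idea (an illustration, not this repo's script), assuming boxes is an (N, 2) NumPy array of ground-truth box widths and heights:

import numpy as np

def iou_wh(box, clusters):
    # IoU between one (w, h) pair and k cluster (w, h) pairs, all anchored at the origin.
    inter = np.minimum(box[0], clusters[:, 0]) * np.minimum(box[1], clusters[:, 1])
    union = box[0] * box[1] + clusters[:, 0] * clusters[:, 1] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100):
    # Cluster ground-truth (w, h) pairs with the 1 - IoU distance used in the YOLO papers.
    boxes = boxes.astype(np.float64)
    clusters = boxes[np.random.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to the cluster it overlaps most (smallest 1 - IoU distance).
        nearest = np.array([np.argmax(iou_wh(b, clusters)) for b in boxes])
        for c in range(k):
            members = boxes[nearest == c]
            if len(members) > 0:
                clusters[c] = np.median(members, axis=0)
    return clusters

# anchors = kmeans_anchors(boxes, k=9)   # nine anchors, three per detection scale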
In this project, I use the pretrained weights for recognition, covering the 80 trained YOLO classes from the COCO dataset. The class label is represented as c, an integer from 1 to 80, where each number corresponds to one class. If c = 3, the classified object is a car. The image features learned by the deep convolutional layers are passed to a classifier and a regressor, which make the detection prediction (coordinates of the bounding boxes, the class label, etc.); see also the picture below. (Thanks Levio for the great image!)
Each bounding box is represented as (Rx, Ry, Rw, Rh, Pc, C1..Cn), as explained above. In this case n = 80, which means c is an 80-dimensional vector, and the final size of the bounding-box representation is 85. Pc is the confidence that an object is present, the four numbers bx, by, bw, bh carry the bounding box information, and the remaining 80 numbers are the output probabilities of the corresponding-index classes. (A question I had here: is Pc one of those 80 numbers? It is not; it is a separate objectness score, which is why the vector has 85 entries rather than 84.)
The output may contain several rectangles that are false positives or that overlap. If your input image size is [416, 416, 3], you will get (52x52 + 26x26 + 13x13) x 3 = 10647 boxes, since YOLO v3 uses 9 anchor boxes in total (three for each scale). So it is time to find a way to reduce them. The first attempt to reduce these rectangles is to filter them by a score threshold.
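As a hedged illustration of that layout, assuming pred is a [10647, 85] array ordered as (bx, by, bw, bh, Pc, C1..C80) (the exact ordering depends on the implementation), it can be split like this:

import numpy as np

pred = np.random.rand(10647, 85).astype(np.float32)  # dummy network output, for illustration only

xywh  = pred[:, 0:4]   # bx, by, bw, bh : bounding box information
conf  = pred[:, 4:5]   # Pc             : objectness confidence
probs = pred[:, 5:85]  # C1..C80        : per-class probabilities

# The per-class detection score is the objectness confidence times the class probability.
scores = conf * probs  # shape [10647, 80]; this is what gets thresholded and fed to NMS below
print(xywh.shape, conf.shape, scores.shape)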
Input arguments:
boxes: tensor of shape [10647, 4]
scores: tensor of shape [10647, 80], containing the detection scores for the 80 classes
score_thresh: float value; boxes whose scores fall below it are discarded

# Step 1: Create a filtering mask based on "box_class_scores" by using "threshold".
score_thresh = 0.4
mask = tf.greater_equal(scores, tf.constant(score_thresh))
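A possible next step (a sketch of the idea, not necessarily this repo's exact code) is to drop every box for which no class passed the threshold:

# Step 2: Keep a box only if at least one of its class scores passed the threshold.
box_mask = tf.reduce_any(mask, axis=1)               # shape [10647]; True if any class survives
filtered_boxes = tf.boolean_mask(boxes, box_mask)    # shape [M, 4]
filtered_scores = tf.boolean_mask(scores, box_mask)  # shape [M, 80]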
Even after filtering by score threshold, we still have a lot of overlapping boxes. The second filtering step is the Non-Maximum Suppression (NMS) algorithm.
NMS works as follows:
- Discard all boxes with Pc <= 0.4
- Pick the box with the largest Pc value and output it as a prediction
- Discard any remaining box whose IOU >= 0.5 with the box output in the previous step, then repeat until no boxes remain
In TensorFlow, we can simply implement the non-maximum suppression algorithm like this; for more details, see here:
for i in range(num_classes):
    # max_output_size (the maximum number of boxes kept per class) is a required argument
    tf.image.non_max_suppression(boxes, score[:, i], max_output_size, iou_threshold=0.5)
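For a fuller picture, here is a self-contained sketch (an illustration, not this repo's exact code) that runs per-class NMS and gathers the surviving boxes and scores; max_boxes is a hypothetical cap on detections per class:

import tensorflow as tf

def per_class_nms(boxes, scores, num_classes, max_boxes=20, iou_thresh=0.5):
    # Run NMS independently for each class and collect the surviving detections.
    results = []
    for i in range(num_classes):
        keep = tf.image.non_max_suppression(boxes, scores[:, i],
                                            max_output_size=max_boxes,
                                            iou_threshold=iou_thresh)
        # Gather the boxes and the per-class scores that NMS kept for class i.
        results.append((tf.gather(boxes, keep), tf.gather(scores[:, i], keep)))
    return results  # one (boxes, scores) pair per class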
Non-max suppression relies on a very important function called "Intersection over Union", or IoU. Here is an example of the non-maximum suppression algorithm: the algorithm receives 4 overlapping bounding boxes as input, and the output returns only one.
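Since IoU comes up repeatedly, here is a minimal sketch of computing it for two axis-aligned boxes given as (x_min, y_min, x_max, y_max):

def iou(box_a, box_b):
    # Intersection over Union of two boxes given as (x_min, y_min, x_max, y_max).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 = 0.142857...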
If you want more details, read the fucking source code and the original paper, or contact me!
- YOLOv3目标检测有了TensorFlow实现,可用自己的数据来训练 (YOLOv3 object detection implemented in TensorFlow, trainable on your own data)
- Implementing YOLO v3 in Tensorflow (TF-Slim)
- YOLOv3_TensorFlow
- Object Detection using YOLOv2 on Pascal VOC2012
- Understanding YOLO