MNN-yolov3

Introduction

MNN demo of YOLOv3 (converted from Stronger-Yolo).

Quick Start (cpp)

  1. Install MNN following the corresponding guide.
  2. Set up an environment following Stronger-Yolo.
  3. Run v3/pb.py to convert the TensorFlow checkpoint into a frozen .pb model (a minimal sketch of this step follows the list).
  4. (Optional) Fold constants using the TensorFlow graph-transform tool (recommended by MNN):
    bazel-bin/tensorflow/tools/graph_transforms/transform_graph --transforms=fold_constants(ignore_errors=true)
    
  5. Convert the model (remember to build the converter tools first):
    cd {MNN dir}/tools/converter/build/
    ./MNNConvert -f TF --modelFile {MNN-yolov3 project dir}/v3/port/coco544.pb --MNNModel coco544.mnn --bizCode MNN
    
  6. Copy MNN-demo/yolo.cpp into {MNN dir}/demo/exec and modify {MNN dir}/demo/exec/CMakeLists.txt following MNN-demo/CMakeLists.txt.
  7. Build and run the C++ demo.
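
Step 3 is essentially a standard TensorFlow 1.x checkpoint freeze. A rough sketch of what v3/pb.py does (the checkpoint path and output node names below are placeholders, not the project's actual values):

```python
import tensorflow as tf

# Placeholder path and node names; the real values live in v3/pb.py.
CKPT_PATH = "checkpoint/yolo.ckpt"
OUTPUT_NODES = ["pred_sbbox/concat", "pred_mbbox/concat", "pred_lbbox/concat"]

with tf.Session() as sess:
    saver = tf.train.import_meta_graph(CKPT_PATH + ".meta")
    saver.restore(sess, CKPT_PATH)
    # Freeze variables into constants so the .pb file is self-contained.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, OUTPUT_NODES)
    tf.train.write_graph(frozen, "v3/port", "coco544.pb", as_text=False)
```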

Quick Start (python) (updated 2019-9-28)

  1. Install MNN-python following the corresponding guide.
  2. Set up an environment following Stronger-Yolo.
  3. Run v3/pb.py to convert the TensorFlow checkpoint into a frozen .pb model (see the sketch in the C++ section above).
  4. (Optional) Fold constants using the TensorFlow graph-transform tool (recommended by MNN):
    bazel-bin/tensorflow/tools/graph_transforms/transform_graph --transforms=fold_constants(ignore_errors=true)
    
  5. Convert the model (remember to build the converter tools first):
    mnnconvert -f TF --modelFile voc544.pb --MNNModel voc544_python.mnn
    
  6. A Python demo is provided at MNN-demo/demo.py; a minimal inference sketch follows below.
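
For reference, a minimal sketch of running the converted model with the MNN Python API (the 544x544 NHWC input shape is assumed from the export above; preprocessing and decoding of the YOLO heads are omitted, so see MNN-demo/demo.py for the full pipeline):

```python
import numpy as np
import MNN

interpreter = MNN.Interpreter("voc544_python.mnn")
session = interpreter.createSession()
input_tensor = interpreter.getSessionInput(session)

# Dummy 544x544 RGB input in NHWC layout, matching the TensorFlow export.
image = np.random.rand(1, 544, 544, 3).astype(np.float32)
tmp_input = MNN.Tensor((1, 544, 544, 3), MNN.Halide_Type_Float,
                       image, MNN.Tensor_DimensionType_Tensorflow)
input_tensor.copyFrom(tmp_input)

interpreter.runSession(session)
output_tensor = interpreter.getSessionOutput(session)
print(output_tensor.getShape())
```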

Quantitative Analysis

Note:
1. Inference time is measured with the official MNN test tool using a score threshold of 0.2. The value in parentheses in the mAP column (e.g. 0.7849) is the original TensorFlow result.
2. All mAP results are evaluated on the first 300 test images to save time.
3. The -quant model is quantized with the official MNN quantization tool. The poor inference speed is due to ARM-specific optimization; check this.

| Model | Input Size | Threads | Inference (ms) | Params | mAP (VOC) |
|---|---|---|---|---|---|
| Yolov3 | 544 | 2/4 | 112/75.1 | 26M | 0.7803 (0.7849) |
| Yolov3 | 320 | 2/4 | 38.6/24.2 | 26M | 0.7127 (0.7249) |
| Yolov3-quant | 320 | 2/4 | 316.2/225.2 | 6.7M | 0.7082 (0.7249) |

Important Notes on Model Conversion

  1. Replace v3/model/head/build_nework with build_nework_MNN, which replaces tf.shape with a static input shape and replaces [:, tf.newaxis] with tf.expand_dims, because the strided_slice op is currently not well supported in MNN (see the snippet after this list).

2. Follow this issue to remove/replace some ops.
3. Remove the condition ops related to BatchNormalization and the training flag; otherwise the MNN converter fails with "Identity's input node num. != 1".
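
To illustrate the rewrite in note 1 (a generic example, not the exact code in build_nework_MNN):

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 4])

# Before: produces a strided_slice op, which the MNN converter handles poorly.
y_slice = x[:, tf.newaxis]            # shape (None, 1, 4)

# After: an explicit ExpandDims op with the same result, which converts cleanly.
y_expand = tf.expand_dims(x, axis=1)  # shape (None, 1, 4)
```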

Update: 2019-9-24
There is no need to adjust these ops one by one. Just follow this to replace nn.batch_normalization with nn.fused_batch_norm. After this modification, MNN can also merge BN and ReLU directly into the convolution.
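
A sketch of that replacement for inference, assuming pre-computed moving statistics (variable names here are illustrative):

```python
import tensorflow as tf

def batch_norm_mnn(x, gamma, beta, moving_mean, moving_var, eps=1e-3):
    # Before: tf.nn.batch_normalization(x, moving_mean, moving_var, beta, gamma, eps)
    # After: a single FusedBatchNorm op, which MNN can fold into the preceding conv.
    y, _, _ = tf.nn.fused_batch_norm(
        x, scale=gamma, offset=beta,
        mean=moving_mean, variance=moving_var,
        epsilon=eps, is_training=False)
    return y
```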

Qualitative Comparison

TODO

References

stronger-yolo

MNN

NCNN