
<div align="center"> <img src="docs/imgs/Title.jpg" />

NanoDet-Plus

Super fast and high accuracy lightweight anchor-free object detection model. Real-time on mobile devices.


</div>

Introduction

NanoDet is an FCOS-style one-stage anchor-free object detection model that uses Generalized Focal Loss as its classification and regression loss.

In NanoDet-Plus, we propose a novel label assignment strategy with a simple assign guidance module (AGM) and a dynamic soft label assigner (DSLA) to solve the optimal label assignment problem in lightweight model training. We also introduce a light feature pyramid called Ghost-PAN to enhance multi-layer feature fusion. These improvements boost the previous NanoDet's detection accuracy by 7 mAP on the COCO dataset.
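To make the DSLA idea concrete, here is a heavily simplified sketch in PyTorch. It is illustrative only, not the code in this repository (see the source for the real implementation); the cost form, the 3.0 weight and the top-k selection are assumptions for illustration.

```python
# Simplified sketch of dynamic soft label assignment (illustrative, NOT the
# repository's implementation). Each prior's cost against one ground-truth
# box mixes a classification cost, computed against a soft target scaled by
# localization quality (IoU), with an IoU-based regression cost; the priors
# with the lowest total cost become positives.
import torch
import torch.nn.functional as F

def assignment_cost(cls_prob, ious, reg_weight=3.0, eps=1e-7):
    """cls_prob: (num_priors,) predicted probability of the GT's class.
    ious: (num_priors,) IoU between each prior's predicted box and the GT."""
    soft_target = ious                        # soft label = localization quality
    cls_cost = F.binary_cross_entropy(cls_prob, soft_target, reduction="none")
    reg_cost = -torch.log(ious + eps)         # low IoU -> high cost
    return cls_cost + reg_weight * reg_cost

# Pick the k lowest-cost priors as positives for one GT box.
cls_prob, ious = torch.rand(100), torch.rand(100)
positives = torch.topk(assignment_cost(cls_prob, ious), k=9, largest=False).indices
```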

NanoDet-Plus introduction on Zhihu (in Chinese)

NanoDet introduction on Zhihu (in Chinese)

QQ discussion group: 908606542 (verification answer: 炼丹)


Benchmarks

| Model | Resolution | mAP<sup>val</sup><br>0.5:0.95 | CPU Latency<br>(i7-8700) | ARM Latency<br>(4xA76) | FLOPS | Params | Model Size |
|-------|------------|-------------------------------|--------------------------|------------------------|-------|--------|------------|
| NanoDet-m | 320*320 | 20.6 | 4.98ms | 10.23ms | 0.72G | 0.95M | 1.8MB(FP16) \| 980KB(INT8) |
| NanoDet-Plus-m | 320*320 | 27.0 | 5.25ms | 11.97ms | 0.9G | 1.17M | 2.3MB(FP16) \| 1.2MB(INT8) |
| NanoDet-Plus-m | 416*416 | 30.4 | 8.32ms | 19.77ms | 1.52G | 1.17M | 2.3MB(FP16) \| 1.2MB(INT8) |
| NanoDet-Plus-m-1.5x | 320*320 | 29.9 | 7.21ms | 15.90ms | 1.75G | 2.44M | 4.7MB(FP16) \| 2.3MB(INT8) |
| NanoDet-Plus-m-1.5x | 416*416 | 34.1 | 11.50ms | 25.49ms | 2.97G | 2.44M | 4.7MB(FP16) \| 2.3MB(INT8) |
| YOLOv3-Tiny | 416*416 | 16.6 | - | 37.6ms | 5.62G | 8.86M | 33.7MB |
| YOLOv4-Tiny | 416*416 | 21.7 | - | 32.81ms | 6.96G | 6.06M | 23.0MB |
| YOLOX-Nano | 416*416 | 25.8 | - | 23.08ms | 1.08G | 0.91M | 1.8MB(FP16) |
| YOLOv5-n | 640*640 | 28.4 | - | 44.39ms | 4.5G | 1.9M | 3.8MB(FP16) |
| FBNetV5 | 320*640 | 30.4 | - | - | 1.8G | - | - |
| MobileDet | 320*320 | 25.6 | - | - | 0.9G | - | - |

Download pre-trained models and find more models in the Model Zoo or in the Release Files.


NEWS!!!

Find more update notes in Update notes.

Demo

Android demo

android_demo

The Android demo project is in the demo_android_ncnn folder. Please refer to the Android demo guide.

Here is a better implementation 👉 ncnn-android-nanodet

NCNN C++ demo

The C++ demo based on ncnn is in the demo_ncnn folder. Please refer to the Cpp demo guide.

MNN demo

An inference demo using Alibaba's MNN framework is in the demo_mnn folder. Please refer to the MNN demo guide.

OpenVINO demo

An inference demo using OpenVINO is in the demo_openvino folder. Please refer to the OpenVINO demo guide.

Web browser demo

https://nihui.github.io/ncnn-webassembly-nanodet/

Pytorch demo

First, install the requirements and set up NanoDet following the installation guide. Then download the COCO pretrained weights from here:

👉 COCO pretrained checkpoint

The pre-trained weights were trained with the config config/nanodet-plus-m_416.yml.

```shell
python demo/demo.py image --config CONFIG_PATH --model MODEL_PATH --path IMAGE_PATH
python demo/demo.py video --config CONFIG_PATH --model MODEL_PATH --path VIDEO_PATH
python demo/demo.py webcam --config CONFIG_PATH --model MODEL_PATH --camid YOUR_CAMERA_ID
```

We also provide a notebook here to demonstrate how to run inference with PyTorch.
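For reference, the gist of such a script looks like the sketch below. It leans on the Predictor helper defined in demo/demo.py and assumes it is run from the repository root with a downloaded checkpoint; names and signatures are taken on trust from the demo code and may drift between versions.

```python
# Minimal PyTorch inference sketch (illustrative; demo/demo.py and the
# notebook are the authoritative references).
import torch
from nanodet.util import Logger, cfg, load_config
from demo.demo import Predictor  # requires running from the repo root

load_config(cfg, "config/nanodet-plus-m_416.yml")
logger = Logger(local_rank=0, use_tensorboard=False)

device = "cuda:0" if torch.cuda.is_available() else "cpu"
predictor = Predictor(cfg, "nanodet-plus-m_416.ckpt", logger, device=device)

# res maps class index -> [x1, y1, x2, y2, score] detections for the image
meta, res = predictor.inference("path/to/image.jpg")
```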


Install

Requirements

The full list of Python dependencies is in requirements.txt; the steps below set up a Python 3.8 conda environment with PyTorch.

Steps

  1. Create a conda virtual environment and then activate it.

     ```shell
     conda create -n nanodet python=3.8 -y
     conda activate nanodet
     ```

  2. Install PyTorch.

     ```shell
     conda install pytorch torchvision cudatoolkit=11.1 -c pytorch -c conda-forge
     ```

  3. Clone this repository.

     ```shell
     git clone https://github.com/RangiLyu/nanodet.git
     cd nanodet
     ```

  4. Install the requirements.

     ```shell
     pip install -r requirements.txt
     ```

  5. Set up NanoDet.

     ```shell
     python setup.py develop
     ```
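After step 5, a quick import check (a convenience suggestion, not part of the official guide) confirms the environment is set up:

```python
# Post-install sanity check: both packages should import cleanly.
import torch
import nanodet

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("nanodet installed at:", nanodet.__file__)
```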

Model Zoo

NanoDet supports a variety of backbones. Go to the config folder to see the sample training config files.

| Model | Backbone | Resolution | COCO mAP | FLOPS | Params | Pre-train weight |
|-------|----------|------------|----------|-------|--------|------------------|
| NanoDet-m | ShuffleNetV2 1.0x | 320*320 | 20.6 | 0.72G | 0.95M | Download |
| NanoDet-Plus-m-320 (NEW) | ShuffleNetV2 1.0x | 320*320 | 27.0 | 0.9G | 1.17M | Weight \| Checkpoint |
| NanoDet-Plus-m-416 (NEW) | ShuffleNetV2 1.0x | 416*416 | 30.4 | 1.52G | 1.17M | Weight \| Checkpoint |
| NanoDet-Plus-m-1.5x-320 (NEW) | ShuffleNetV2 1.5x | 320*320 | 29.9 | 1.75G | 2.44M | Weight \| Checkpoint |
| NanoDet-Plus-m-1.5x-416 (NEW) | ShuffleNetV2 1.5x | 416*416 | 34.1 | 2.97G | 2.44M | Weight \| Checkpoint |

Notice: the difference between Weight and Checkpoint is that a Weight file contains only the parameters needed at inference time, while a Checkpoint also contains the training-time parameters.
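You can see the difference by inspecting a downloaded file with torch.load; the keys printed below are typical for Lightning-style checkpoints, and the filename is a placeholder:

```python
# Inspect what a downloaded file contains (keys vary by version; a Checkpoint
# also stores training state such as optimizer/epoch, a Weight file mostly
# just the model parameters).
import torch

ckpt = torch.load("nanodet-plus-m_416.ckpt", map_location="cpu")
print(list(ckpt.keys()))
```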

Legacy Model Zoo

| Model | Backbone | Resolution | COCO mAP | FLOPS | Params | Pre-train weight |
|-------|----------|------------|----------|-------|--------|------------------|
| NanoDet-m-416 | ShuffleNetV2 1.0x | 416*416 | 23.5 | 1.2G | 0.95M | Download |
| NanoDet-m-1.5x | ShuffleNetV2 1.5x | 320*320 | 23.5 | 1.44G | 2.08M | Download |
| NanoDet-m-1.5x-416 | ShuffleNetV2 1.5x | 416*416 | 26.8 | 2.42G | 2.08M | Download |
| NanoDet-m-0.5x | ShuffleNetV2 0.5x | 320*320 | 13.5 | 0.3G | 0.28M | Download |
| NanoDet-t | ShuffleNetV2 1.0x | 320*320 | 21.7 | 0.96G | 1.36M | Download |
| NanoDet-g | Custom CSP Net | 416*416 | 22.9 | 4.2G | 3.81M | Download |
| NanoDet-EfficientLite | EfficientNet-Lite0 | 320*320 | 24.7 | 1.72G | 3.11M | Download |
| NanoDet-EfficientLite | EfficientNet-Lite1 | 416*416 | 30.3 | 4.06G | 4.01M | Download |
| NanoDet-EfficientLite | EfficientNet-Lite2 | 512*512 | 32.6 | 7.12G | 4.71M | Download |
| NanoDet-RepVGG | RepVGG-A0 | 416*416 | 27.8 | 11.3G | 6.75M | Download |

How to Train

  1. Prepare dataset

    If your dataset annotations are in Pascal VOC XML format, refer to config/nanodet_custom_xml_dataset.yml.

    If your dataset annotations are in YOLO format (Darknet TXT), refer to config/nanodet-plus-m_416-yolo.yml.

    Otherwise, convert your dataset annotations to MS COCO format (COCO annotation format details).

  2. Prepare config file

    Copy and modify an example yml config file in the config/ folder.

    Change save_dir to where you want to save the model.

    Change num_classes in model->arch->head.

    Change the image path and annotation path in both data->train and data->val.

    Set gpu ids, num workers and batch size in device to fit your machine.

    Set total_epochs, lr and lr_schedule according to your dataset and batch size.

    If you want to modify the network, data augmentation or other settings, please refer to Config File Detail. (A quick way to verify the fields you edited is shown in the sketch after this list.)

  3. Start training

    NanoDet now uses PyTorch Lightning for training.

    For both single-GPU and multi-GPU training, run:

    ```shell
    python tools/train.py CONFIG_FILE_PATH
    ```
    
  4. Visualize Logs

    TensorBoard logs are saved in the save_dir you set in the config file.

    To visualize the TensorBoard logs, run:

    ```shell
    cd <YOUR_SAVE_DIR>
    tensorboard --logdir ./
    ```
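As mentioned in step 2, a quick way to double-check the fields you edited is to load the config in Python. The field names below are assumed from the sample configs in config/ and may differ if you restructure yours:

```python
# Sanity-check the config fields edited in step 2 (field names assumed from
# the sample configs; "config/your_config.yml" is a placeholder).
from nanodet.util import cfg, load_config

load_config(cfg, "config/your_config.yml")
print("save_dir:", cfg.save_dir)
print("num_classes:", cfg.model.arch.head.num_classes)
print("train imgs/anns:", cfg.data.train.img_path, cfg.data.train.ann_path)
print("val imgs/anns:", cfg.data.val.img_path, cfg.data.val.ann_path)
print("gpus:", cfg.device.gpu_ids, "| batch/gpu:", cfg.device.batchsize_per_gpu)
print("epochs:", cfg.schedule.total_epochs)
```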
    

How to Deploy

NanoDet provides multi-backend C++ demos, including ncnn, OpenVINO and MNN. There is also an Android demo based on the ncnn library.

Export model to ONNX

To convert a NanoDet PyTorch model to ncnn, you can go through ONNX: pytorch -> onnx -> ncnn.

To export an onnx model, run tools/export_onnx.py:

```shell
python tools/export_onnx.py --cfg_path ${CONFIG_PATH} --model_path ${PYTORCH_MODEL_PATH}
```
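Before converting the ONNX file further, you can optionally sanity-check it with the onnx and onnxruntime packages (extra dependencies; the filename below is a placeholder for whatever export_onnx.py wrote):

```python
# Optional sanity check of the exported ONNX model (requires onnx and
# onnxruntime; "nanodet.onnx" is a placeholder for your exported file).
import numpy as np
import onnx
import onnxruntime as ort

onnx.checker.check_model(onnx.load("nanodet.onnx"))  # structural validity

sess = ort.InferenceSession("nanodet.onnx")
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # fill dynamic dims
outs = sess.run(None, {inp.name: np.random.rand(*shape).astype(np.float32)})
print(inp.name, shape, "->", [o.shape for o in outs])
```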

Run NanoDet in C++ with inference libraries

ncnn

Please refer to demo_ncnn.

OpenVINO

Please refer to demo_openvino.

MNN

Please refer to demo_mnn.

Run NanoDet on Android

Please refer to android_demo.


Citation

If you find this project useful in your research, please consider citing:

```bibtex
@misc{nanodet,
    title={NanoDet-Plus: Super fast and high accuracy lightweight anchor-free object detection model},
    author={RangiLyu},
    howpublished={\url{https://github.com/RangiLyu/nanodet}},
    year={2021}
}
```

Thanks

https://github.com/Tencent/ncnn

https://github.com/open-mmlab/mmdetection

https://github.com/implus/GFocal

https://github.com/cmdbug/YOLOv5_NCNN

https://github.com/rbgirshick/yacs