<p align="center"> <img src="assets/banner-YOLO.png" align="middle" width = "1000" /> </p>

English | 简体中文

<br> <div> <a href="https://colab.research.google.com/github/meituan/YOLOv6/blob/main/turtorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/code/housanduo/yolov6"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a> </div> <br>

YOLOv6

Implementation of the papers YOLOv6 v3.0: A Full-Scale Reloading and YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications.

<p align="center"> <img src="assets/speed_comparision_v3.png" align="middle" width = "1000" /> </p>

What's New

Benchmark

| Model | Size | mAP<sup>val</sup><br/>0.5:0.95 | Speed<sup>T4</sup><br/>trt fp16 b1<br/>(fps) | Speed<sup>T4</sup><br/>trt fp16 b32<br/>(fps) | Params<br/><sup>(M)</sup> | FLOPs<br/><sup>(G)</sup> |
| :-------- | :--- | :--- | :--- | :--- | :--- | :--- |
| YOLOv6-N  | 640  | 37.5 | 779 | 1187 | 4.7   | 11.4  |
| YOLOv6-S  | 640  | 45.0 | 339 | 484  | 18.5  | 45.3  |
| YOLOv6-M  | 640  | 50.0 | 175 | 226  | 34.9  | 85.8  |
| YOLOv6-L  | 640  | 52.8 | 98  | 116  | 59.6  | 150.7 |
| YOLOv6-N6 | 1280 | 44.9 | 228 | 281  | 10.4  | 49.8  |
| YOLOv6-S6 | 1280 | 50.3 | 98  | 108  | 41.4  | 198.0 |
| YOLOv6-M6 | 1280 | 55.2 | 47  | 55   | 79.6  | 379.5 |
| YOLOv6-L6 | 1280 | 57.2 | 26  | 29   | 140.4 | 673.4 |
<details> <summary>Table Notes</summary> </details> <details> <summary>Legacy models</summary>
| Model | Size | mAP<sup>val</sup><br/>0.5:0.95 | Speed<sup>T4</sup><br/>trt fp16 b1<br/>(fps) | Speed<sup>T4</sup><br/>trt fp16 b32<br/>(fps) | Params<br/><sup>(M)</sup> | FLOPs<br/><sup>(G)</sup> |
| :------------ | :-- | :------------------------------------------ | :-- | :--- | :--- | :---- |
| YOLOv6-N      | 640 | 35.9<sup>300e</sup><br/>36.3<sup>400e</sup> | 802 | 1234 | 4.3  | 11.1  |
| YOLOv6-T      | 640 | 40.3<sup>300e</sup><br/>41.1<sup>400e</sup> | 449 | 659  | 15.0 | 36.7  |
| YOLOv6-S      | 640 | 43.5<sup>300e</sup><br/>43.8<sup>400e</sup> | 358 | 495  | 17.2 | 44.2  |
| YOLOv6-M      | 640 | 49.5 | 179 | 233 | 34.3 | 82.2  |
| YOLOv6-L-ReLU | 640 | 51.7 | 113 | 149 | 58.5 | 144.0 |
| YOLOv6-L      | 640 | 52.5 | 98  | 121 | 58.5 | 144.0 |

Quantized model 🚀

| Model | Size | Precision | mAP<sup>val</sup><br/>0.5:0.95 | Speed<sup>T4</sup><br/>trt b1<br/>(fps) | Speed<sup>T4</sup><br/>trt b32<br/>(fps) |
| :-------------- | :-- | :--- | :--- | :--- | :--- |
| YOLOv6-N RepOpt | 640 | INT8 | 34.8 | 1114 | 1828 |
| YOLOv6-N        | 640 | FP16 | 35.9 | 802  | 1234 |
| YOLOv6-T RepOpt | 640 | INT8 | 39.8 | 741  | 1167 |
| YOLOv6-T        | 640 | FP16 | 40.3 | 449  | 659  |
| YOLOv6-S RepOpt | 640 | INT8 | 43.3 | 619  | 924  |
| YOLOv6-S        | 640 | FP16 | 43.5 | 377  | 541  |
</details>

Mobile Benchmark

| Model | Size | mAP<sup>val</sup><br/>0.5:0.95 | sm8350<br/><sup>(ms)</sup> | mt6853<br/><sup>(ms)</sup> | sdm660<br/><sup>(ms)</sup> | Params<br/><sup>(M)</sup> | FLOPs<br/><sup>(G)</sup> |
| :----------- | :------ | :--- | :---- | :---- | :---- | :--- | :--- |
| YOLOv6Lite-S | 320*320 | 22.4 | 7.99  | 11.99 | 41.86 | 0.55 | 0.56 |
| YOLOv6Lite-M | 320*320 | 25.1 | 9.08  | 13.27 | 47.95 | 0.79 | 0.67 |
| YOLOv6Lite-L | 320*320 | 28.0 | 11.37 | 16.20 | 61.40 | 1.09 | 0.87 |
| YOLOv6Lite-L | 320*192 | 25.0 | 7.02  | 9.66  | 36.13 | 1.09 | 0.52 |
| YOLOv6Lite-L | 224*128 | 18.9 | 3.63  | 4.99  | 17.76 | 1.09 | 0.24 |
<details> <summary>Table Notes</summary> </details>

Quick Start

<details> <summary> Install</summary>
git clone https://github.com/meituan/YOLOv6
cd YOLOv6
pip install -r requirements.txt
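
The requirements are installed into whatever Python environment is currently active. If you prefer to keep them isolated, a virtual environment can be created first (a minimal sketch; the environment name is arbitrary):

# optional: create and activate an isolated environment before installing requirements
python3 -m venv yolov6-env               # environment name is arbitrary
source yolov6-env/bin/activate           # on Windows: yolov6-env\Scripts\activate
pip install --upgrade pip
pip install -r requirements.txt          # same requirements install as above, now inside the venv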
</details> <details> <summary> Reproduce our results on COCO</summary>

Please refer to Train COCO Dataset.
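
That tutorial covers dataset preparation in detail. For reference, the official COCO 2017 archives can be fetched directly from cocodataset.org (these are the standard public URLs, not specific to this repo):

# download COCO 2017 images and annotations (roughly 19 GB, 1 GB, and 250 MB)
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip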

</details> <details open> <summary> Finetune on custom data</summary>

Single GPU

# P5 models
python tools/train.py --batch 32 --conf configs/yolov6s_finetune.py --data data/dataset.yaml --fuse_ab --device 0
# P6 models
python tools/train.py --batch 32 --conf configs/yolov6s6_finetune.py --data data/dataset.yaml --img 1280 --device 0

Multiple GPUs (DDP mode recommended)

# P5 models
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --batch 256 --conf configs/yolov6s_finetune.py --data data/dataset.yaml --fuse_ab --device 0,1,2,3,4,5,6,7
# P6 models
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --batch 128 --conf configs/yolov6s6_finetune.py --data data/dataset.yaml --img 1280 --device 0,1,2,3,4,5,6,7
Make sure your dataset is structured as follows:

├── coco
│   ├── annotations
│   │   ├── instances_train2017.json
│   │   └── instances_val2017.json
│   ├── images
│   │   ├── train2017
│   │   └── val2017
│   ├── labels
│   │   ├── train2017
│   │   └── val2017
│   ├── LICENSE
│   └── README.txt

YOLOv6 supports different input resolution modes. For details, see How to Set the Input Size.
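
The --data argument points to a dataset config file. Below is a minimal sketch of a custom data/dataset.yaml; the field names follow data/coco.yaml in this repo, while the paths, class count, and class names are placeholders for your own data:

# write a minimal dataset config (paths, nc, and names below are placeholders)
cat > data/dataset.yaml <<'EOF'
train: ../custom_dataset/images/train   # training images
val: ../custom_dataset/images/val       # validation images
is_coco: False                          # custom data, so skip COCO-specific evaluation
nc: 2                                   # number of classes
names: ['person', 'car']                # class names, one per class id
EOF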

</details> <details> <summary>Resume training</summary>

If your training process is interrupted, you can resume training with:

# single GPU training.
python tools/train.py --resume

# multi GPU training.
python -m torch.distributed.launch --nproc_per_node 8 tools/train.py --resume

The above commands will automatically find the latest checkpoint in the YOLOv6 directory and resume training from it.

You can also pass a checkpoint path to the --resume parameter:

# replace /path/to/your/checkpoint/path with the checkpoint from which you want to resume training
--resume /path/to/your/checkpoint/path

This will resume from the specific checkpoint you provide.
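For example (the path below is only illustrative; point it at a checkpoint saved by your own training run):

# resume from a specific checkpoint (path is illustrative)
python tools/train.py --resume runs/train/exp/weights/last_ckpt.pt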

</details> <details open> <summary> Evaluation</summary>

Reproduce mAP on the COCO val2017 dataset at 640×640 or 1280×1280 resolution:

# P5 models
python tools/eval.py --data data/coco.yaml --batch 32 --weights yolov6s.pt --task val --reproduce_640_eval
# P6 models
python tools/eval.py --data data/coco.yaml --batch 32 --weights yolov6s6.pt --task val --reproduce_640_eval --img 1280
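
To evaluate a model finetuned on custom data, point --data at your dataset config and --weights at your checkpoint (the paths below are illustrative):

# evaluate a finetuned checkpoint on a custom validation set (paths are illustrative)
python tools/eval.py --data data/dataset.yaml --batch 32 --weights runs/train/exp/weights/best_ckpt.pt --task val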
</details> <details> <summary>Inference</summary>

First, download a pretrained model from the YOLOv6 releases, or use your own trained model for inference.

Second, run inference with tools/infer.py

# P5 models
python tools/infer.py --weights yolov6s.pt --source img.jpg / imgdir / video.mp4
# P6 models
python tools/infer.py --weights yolov6s6.pt --img 1280 1280 --source img.jpg / imgdir / video.mp4

If you want to run inference on a local camera or a web camera, you can run:

# P5 models
python tools/infer.py --weights yolov6s.pt --webcam --webcam-addr 0
# P6 models
python tools/infer.py --weights yolov6s6.pt --img 1280 1280 --webcam --webcam-addr 0

--webcam-addr can be a local camera id or an RTSP address.
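
For an IP camera, the same flag takes an RTSP URL (the address below is a placeholder for your camera's stream):

# RTSP stream as the webcam source (address is a placeholder)
python tools/infer.py --weights yolov6s.pt --webcam --webcam-addr rtsp://192.168.1.10:554/stream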

</details> <details> <summary> Deployment</summary> </details> <details open> <summary> Tutorials</summary> </details> <details> <summary> Third-party resources</summary> </details>

FAQ (Continuously updated)

If you have any questions, you are welcome to join our WeChat group for discussion and exchange.

<p align="center"> <img src="assets/wechat_qrcode.png" align="middle" width = "1000" /> </p>