YOLOv3 in PyTorch

PyTorch implementation of YOLOv3

<p align="left"><img src="data/innsbruck_result.png" height="160"\> <img src="data/mountain_result.png" height="160"\></p>

What's New

Performance

Inference using yolov3.weights

<table><tbody>
<tr><th align="left" bgcolor=#f8f8f8> </th> <td bgcolor=white> Original (darknet) </td><td bgcolor=white> Ours (PyTorch) </td></tr>
<tr><th align="left" bgcolor=#f8f8f8> COCO AP[IoU=0.50:0.95], inference</th> <td bgcolor=white> 0.310 </td><td bgcolor=white> 0.311 </td></tr>
<tr><th align="left" bgcolor=#f8f8f8> COCO AP[IoU=0.50], inference</th> <td bgcolor=white> 0.553 </td><td bgcolor=white> 0.558 </td></tr>
</tbody></table>

Training

The benchmark results below were obtained by training models for 500k iterations on the COCO 2017 train dataset with the darknet repo and with our repo. The models were then evaluated on the COCO 2017 val dataset using our repo.

<table><tbody>
<tr><th align="left" bgcolor=#f8f8f8> </th> <td bgcolor=white> darknet weights </td><td bgcolor=white> darknet repo </td><td bgcolor=white> Ours (PyTorch) </td><td bgcolor=white> Ours (PyTorch) </td></tr>
<tr><th align="left" bgcolor=#f8f8f8> batchsize </th> <td bgcolor=white> ?? </td><td bgcolor=white> 4 </td><td bgcolor=white> 4 </td><td bgcolor=white> 8 </td></tr>
<tr><th align="left" bgcolor=#f8f8f8> speed [iter/min](*) </th> <td bgcolor=white> ?? </td><td bgcolor=white> <b>19.2</b> </td><td bgcolor=white> <b>19.4</b> </td><td bgcolor=white> 21.0 </td></tr>
<tr><th align="left" bgcolor=#f8f8f8> COCO AP[IoU=0.50:0.95], training</th> <td bgcolor=white> 0.311 </td><td bgcolor=white> <b>0.284</b> </td><td bgcolor=white> <b>0.283</b> </td><td bgcolor=white> 0.298 </td></tr>
<tr><th align="left" bgcolor=#f8f8f8> COCO AP[IoU=0.50], training</th> <td bgcolor=white> 0.558 </td><td bgcolor=white> <b>0.488</b> </td><td bgcolor=white> <b>0.491</b> </td><td bgcolor=white> 0.511 </td></tr>
</tbody></table>

(*) measured on Tesla V100

<p align="left"><img src="data/val2017_comparison.png" height="280"></p>

Installation

Requirements

optional:

Docker Environment

We provide a Dockerfile to build an environment that meets the above requirements.

# build docker image
$ nvidia-docker build -t yolov3-in-pytorch-image --build-arg UID=`id -u` -f docker/Dockerfile .
# create docker container and login bash
$ nvidia-docker run -it -v `pwd`:/work --name yolov3-in-pytorch-container yolov3-in-pytorch-image
docker@4d69df209f4a:/work$ python train.py --help

Download pretrained weights

Download the pretrained weights file from the author's project page:

$ mkdir weights
$ cd weights/
$ bash ../requirements/download_weights.sh
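
If the download succeeded, the file should start with the darknet weight-file header. Below is a minimal sanity-check sketch; the header layout (three int32 version fields followed by an int64 "images seen" counter, then raw float32 weights) is assumed from the darknet format.

import numpy as np

# Sketch: peek at the darknet weight-file header to confirm the download is intact.
# Header layout assumed from the darknet format: major/minor/revision as int32,
# then an int64 "images seen" counter, followed by the raw float32 weights.
with open("weights/yolov3.weights", "rb") as f:
    major, minor, revision = np.fromfile(f, dtype=np.int32, count=3)
    seen = int(np.fromfile(f, dtype=np.int64, count=1)[0])
    n_params = np.fromfile(f, dtype=np.float32).size

print(f"darknet {major}.{minor}.{revision}, seen {seen} images, {n_params} float32 values")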

COCO 2017 dataset:

The COCO 2017 dataset is downloaded and unzipped by:

$ bash requirements/getcoco.sh
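
To confirm the annotations are readable, pycocotools can load them directly. This is only a sketch; the COCO/ directory layout below is an assumption about where the script unpacks the data.

from pycocotools.coco import COCO

# Sketch: open the val2017 annotations and report basic counts.
# The annotation path is an assumption about the layout produced by getcoco.sh.
coco = COCO("COCO/annotations/instances_val2017.json")
print(len(coco.getImgIds()), "images,", len(coco.getCatIds()), "categories")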

Inference with Pretrained Weights

To detect objects in the sample image, just run:

$ python demo.py --image data/mountain.png --detect_thresh 0.5 --weights_path weights/yolov3.weights

To run the demo with a non-interactive backend, add the --background option.
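
For reference, --detect_thresh is the confidence cutoff applied before non-maximum suppression (NMSTHRE in the config). The snippet below is not the repo's postprocessing code, just a minimal sketch of that filter-then-NMS step with placeholder boxes, using torchvision.ops.nms.

import torch
from torchvision.ops import nms

# Placeholder predictions: boxes as (x1, y1, x2, y2) plus per-box confidence scores.
boxes = torch.tensor([[50., 50., 200., 200.],
                      [55., 60., 210., 205.],
                      [300., 300., 400., 420.]])
scores = torch.tensor([0.92, 0.85, 0.30])

detect_thresh, nms_thresh = 0.5, 0.45               # 0.45 matches NMSTHRE in the config
keep = scores > detect_thresh                       # drop low-confidence boxes first
final = nms(boxes[keep], scores[keep], nms_thresh)  # then suppress overlapping duplicates
print(boxes[keep][final])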

Train

$ python train.py --help
usage: train.py [-h] [--cfg CFG] [--weights_path WEIGHTS_PATH] [--n_cpu N_CPU]
                [--checkpoint_interval CHECKPOINT_INTERVAL]
                [--eval_interval EVAL_INTERVAL] [--checkpoint CHECKPOINT]
                [--checkpoint_dir CHECKPOINT_DIR] [--use_cuda USE_CUDA]
                [--debug] [--tfboard TFBOARD]

optional arguments:
  -h, --help            show this help message and exit
  --cfg CFG             config file. see readme
  --weights_path WEIGHTS_PATH
                        darknet weights file
  --n_cpu N_CPU         number of workers
  --checkpoint_interval CHECKPOINT_INTERVAL
                        interval between saving checkpoints
  --eval_interval EVAL_INTERVAL
                        interval between evaluations
  --checkpoint CHECKPOINT
                        pytorch checkpoint file path
  --checkpoint_dir CHECKPOINT_DIR
                        directory where checkpoint files are saved
  --use_cuda USE_CUDA
  --debug               debug mode where only one image is trained
  --tfboard TFBOARD     tensorboard path for logging

example:

$ python train.py --weights_path weights/darknet53.conv.74 --tfboard log

The training configuration is written in YAML files located in the config folder. We use the following format:

MODEL:
  TYPE: YOLOv3
  BACKBONE: darknet53
  ANCHORS: [[10, 13], [16, 30], [33, 23],
            [30, 61], [62, 45], [59, 119],
            [116, 90], [156, 198], [373, 326]] # the anchors used in the YOLO layers
  ANCH_MASK: [[6, 7, 8], [3, 4, 5], [0, 1, 2]] # anchor filter for each YOLO layer
  N_CLASSES: 80 # number of object classes
TRAIN:
  LR: 0.001
  MOMENTUM: 0.9
  DECAY: 0.0005
  BURN_IN: 1000 # duration (iters) for learning rate burn-in
  MAXITER: 500000
  STEPS: (400000, 450000) # lr-drop iter points
  BATCHSIZE: 4 
  SUBDIVISION: 16 # num of minibatch inner-iterations
  IMGSIZE: 608 # initial image size
  LOSSTYPE: l2 # loss type for w, h
  IGNORETHRE: 0.7 # IoU threshold for learning conf
AUGMENTATION: # data augmentation section only for training
  RANDRESIZE: True # enable random resizing
  JITTER: 0.3 # amplitude of jitter for resizing
  RANDOM_PLACING: True # enable random placing
  HUE: 0.1 # random distortion parameter
  SATURATION: 1.5 # random distortion parameter
  EXPOSURE: 1.5 # random distortion parameter
  LRFLIP: True # enable horizontal flip
  RANDOM_DISTORT: False # enable random distortion in HSV space
TEST:
  CONFTHRE: 0.8 # not used
  NMSTHRE: 0.45 # same as official darknet
  IMGSIZE: 416 # this can be changed to measure acc-speed tradeoff
NUM_GPUS: 1
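
The config can be read with a standard YAML parser; note that STEPS is written as a tuple literal, which YAML loads as a plain string. Below is a minimal sketch of parsing the file and reproducing the burn-in/step learning-rate schedule described above; the config filename and the power-4 burn-in curve (the usual darknet convention) are assumptions.

import ast
import yaml

# Sketch: load the training config and compute the learning rate at a given iteration.
# The config filename is an assumption; the power-4 burn-in mirrors the darknet convention.
with open("config/yolov3_default.cfg") as f:
    cfg = yaml.safe_load(f)

train = cfg["TRAIN"]
steps = ast.literal_eval(str(train["STEPS"]))  # "(400000, 450000)" -> (400000, 450000)

def scheduled_lr(iteration):
    if iteration < train["BURN_IN"]:
        return train["LR"] * (iteration / train["BURN_IN"]) ** 4  # warm-up ramp
    lr = train["LR"]
    for step in steps:
        if iteration >= step:
            lr *= 0.1                                             # 10x drop at each step point
    return lr

print(scheduled_lr(500), scheduled_lr(100000), scheduled_lr(420000))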

Evaluate COCO AP

$ python train.py --cfg config/yolov3_eval.cfg --eval_interval 1 [--checkpoint checkpoint_path] [--weights_path weights_path]
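
The reported AP values follow the standard COCO protocol, so the same numbers can be reproduced with pycocotools if you dump detections to a COCO-format results JSON. A sketch, where both file paths are assumptions:

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Sketch: standard COCO bbox evaluation; the annotation and results paths are assumptions.
coco_gt = COCO("COCO/annotations/instances_val2017.json")
coco_dt = coco_gt.loadRes("detections_val2017.json")   # detections in COCO results format

coco_eval = COCOeval(coco_gt, coco_dt, "bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()   # prints AP[IoU=0.50:0.95], AP[IoU=0.50], ...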

TODOs

Paper

YOLOv3: An Incremental Improvement

Joseph Redmon, Ali Farhadi <br>

[Paper] [Original Implementation] [Author's Project Page]

Credit

@article{yolov3,
  title={YOLOv3: An Incremental Improvement},
  author={Redmon, Joseph and Farhadi, Ali},
  journal={arXiv},
  year={2018}
}