Home

Awesome

Update History

1. Introduction

In this repo, we release LFD, a new one-stage anchor-free detector. LFD surpasses the previous LFFD in most aspects. We are trying to make object detection easier, more explainable, and more applicable. With LFD, you are able to train and deploy a desired model without all the bells and whistles. Eventually, we hope LFD can become as popular as the YOLO series in the industrial community.

1.1 New Features

Compared to LFFD, LFD has the following features:

1.2 Performance Highlights

Before diving into the code, we present some performance results on two datasets, covering both accuracy and inference latency.

Dataset 1: WIDERFACE (single-class)

Accuracy on the val set under the SIO evaluation schema proposed in LFFD:

| Model Version | Easy Set | Medium Set | Hard Set |
|---------------|----------|------------|----------|
| v2            | 0.875    | 0.863      | 0.754    |
| WIDERFACE-L   | 0.887    | 0.896      | 0.863    |
| WIDERFACE-M   | 0.874    | 0.888      | 0.855    |
| WIDERFACE-S   | 0.873    | 0.885      | 0.849    |
| WIDERFACE-XS  | 0.866    | 0.877      | 0.839    |
Inference latency

Platform: RTX 2080Ti, CUDA 10.2, CUDNN 8.0.4, TensorRT 7.2.2.3

FP32 mode:

| Model Version | 640×480 | 1280×720 | 1920×1080 | 3840×2160 |
|---------------|---------|----------|-----------|-----------|
| v2            | 2.12 ms (472.04 FPS) | 5.02 ms (199.10 FPS) | 10.80 ms (92.63 FPS) | 42.41 ms (23.58 FPS) |
| WIDERFACE-L   | 2.67 ms (374.19 FPS) | 6.31 ms (158.38 FPS) | 13.51 ms (74.04 FPS) | 94.61 ms (10.57 FPS) |
| WIDERFACE-M   | 2.47 ms (404.23 FPS) | 5.70 ms (175.38 FPS) | 12.28 ms (81.43 FPS) | 87.90 ms (11.38 FPS) |
| WIDERFACE-S   | 1.82 ms (548.42 FPS) | 3.57 ms (280.00 FPS) | 7.35 ms (136.02 FPS)  | 27.93 ms (35.81 FPS) |
| WIDERFACE-XS  | 1.58 ms (633.06 FPS) | 3.03 ms (330.36 FPS) | 6.14 ms (163.00 FPS)  | 23.26 ms (43.00 FPS) |
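In these tables, the reported FPS is simply the reciprocal of the per-frame latency in milliseconds. As a quick sanity check (using the v2 entry at 640×480 from the table above):

```python
# FPS is the reciprocal of per-frame latency: FPS = 1000 / latency_ms.
# Check against the v2 entry at 640x480 (2.12 ms, reported as 472.04 FPS).
latency_ms = 2.12
fps = 1000.0 / latency_ms
print(f"{fps:.2f} FPS")  # close to the reported 472.04 FPS
```

Small differences from the reported numbers come from rounding the latency to two decimal places.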

Note: the v2 results are taken directly from LFFD, where the platform conditions differ slightly from those listed here.

FP16 mode:

| Model Version | 640×480 | 1280×720 | 1920×1080 | 3840×2160 |
|---------------|---------|----------|-----------|-----------|
| WIDERFACE-L   | 1.68 ms (594.12 FPS) | 3.69 ms (270.78 FPS) | 7.66 ms (130.51 FPS) | 28.65 ms (34.90 FPS) |
| WIDERFACE-M   | 1.61 ms (622.42 FPS) | 3.51 ms (285.13 FPS) | 7.31 ms (136.79 FPS) | 27.32 ms (36.60 FPS) |
| WIDERFACE-S   | 1.26 ms (793.97 FPS) | 2.39 ms (418.68 FPS) | 4.88 ms (205.09 FPS) | 18.46 ms (54.18 FPS) |
| WIDERFACE-XS  | 1.23 ms (813.01 FPS) | 2.18 ms (459.17 FPS) | 4.57 ms (218.62 FPS) | 17.35 ms (57.65 FPS) |

It can be observed that FP16 mode is evidently faster than FP32 mode, so FP16 is highly recommended for deployment whenever possible.
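To quantify the gap, you can compute the per-cell speedup of FP16 over FP32, e.g. for WIDERFACE-L at 640×480 (2.67 ms in FP32 vs 1.68 ms in FP16 in the tables above):

```python
# Speedup of FP16 over FP32 for WIDERFACE-L at 640x480,
# using the latencies reported in the tables above.
fp32_ms = 2.67
fp16_ms = 1.68
speedup = fp32_ms / fp16_ms
print(f"FP16 speedup: {speedup:.2f}x")  # ~1.59x
```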

INT8 mode:

| Model Version | 640×480 | 1280×720 | 1920×1080 | 3840×2160 |
|---------------|---------|----------|-----------|-----------|
| WIDERFACE-L   | 1.50 ms (667.95 FPS) | 3.24 ms (308.43 FPS) | 6.83 ms (146.41 FPS) | - |
| WIDERFACE-M   | 1.45 ms (689.00 FPS) | 3.15 ms (317.60 FPS) | 6.61 ms (151.20 FPS) | - |
| WIDERFACE-S   | 1.17 ms (855.29 FPS) | 2.14 ms (466.86 FPS) | 4.40 ms (227.18 FPS) | - |
| WIDERFACE-XS  | 1.09 ms (920.91 FPS) | 2.03 ms (493.54 FPS) | 4.11 ms (243.15 FPS) | - |

CAUTION: '-' means the result is unavailable because calibration ran out of memory.

Dataset 2: TT100K (multi-class, 45 classes)

Precision & Recall on the test set of TT100K [1]:

| Model Version           | Precision | Recall |
|-------------------------|-----------|--------|
| FastRCNN in [1]         | 0.5014    | 0.5554 |
| Method proposed in [1]  | 0.8773    | 0.9065 |
| LFD_L                   | 0.9205    | 0.9129 |
| LFD_S                   | 0.9202    | 0.9042 |

We use only the train split (6,105 images) for model training and evaluate on the test split (3,071 images). In [1], the authors extended the training set: classes with between 100 and 1000 instances in the training set were augmented to 1000 instances each. However, the augmented data was not released, which means we use much less data for training than [1] did. As the table shows, our models still achieve better performance. The Precision & Recall results of [1] can be found in its released code folder: code/results/report_xxxx.txt.
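For reference, the precision and recall above follow the standard detection definitions. A minimal sketch, assuming the matching between predicted and ground-truth boxes has already been established:

```python
def precision_recall(num_true_positives, num_predictions, num_ground_truths):
    """Standard detection precision/recall given matched detections.

    precision = TP / (TP + FP) = TP / num_predictions
    recall    = TP / (TP + FN) = TP / num_ground_truths
    """
    precision = num_true_positives / num_predictions
    recall = num_true_positives / num_ground_truths
    return precision, recall

# Hypothetical example: 90 correct detections out of 100 predictions,
# against 120 ground-truth boxes.
p, r = precision_recall(90, 100, 120)
print(p, r)  # 0.9 0.75
```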

Inference latency

Platform: RTX 2080Ti, CUDA 10.2, CUDNN 8.0.4, TensorRT 7.2.2.3

FP32 mode:

| Model Version | 1280×720 | 1920×1080 | 3840×2160 |
|---------------|----------|-----------|-----------|
| LFD_L         | 9.87 ms (101.35 FPS) | 21.56 ms (46.38 FPS) | 166.66 ms (6.00 FPS) |
| LFD_S         | 4.31 ms (232.27 FPS) | 8.96 ms (111.64 FPS) | 34.01 ms (29.36 FPS) |

FP16 mode:

| Model Version | 1280×720 | 1920×1080 | 3840×2160 |
|---------------|----------|-----------|-----------|
| LFD_L         | 6.28 ms (159.27 FPS) | 13.09 ms (76.38 FPS) | 49.79 ms (20.09 FPS) |
| LFD_S         | 3.03 ms (329.68 FPS) | 6.27 ms (159.54 FPS) | 23.41 ms (42.72 FPS) |

INT8 mode:

| Model Version | 1280×720 | 1920×1080 | 3840×2160 |
|---------------|----------|-----------|-----------|
| LFD_L         | 5.96 ms (167.89 FPS) | 12.68 ms (78.86 FPS) | - |
| LFD_S         | 2.90 ms (345.33 FPS) | 5.89 ms (169.86 FPS) | - |

CAUTION: '-' means the result is unavailable because calibration ran out of memory.

2. Get Started

2.1 Install

Prerequisites

All of the above versions have been tested; newer versions may work as well but are not fully tested.

Build Internal Libs

In the repo root, run the command below:

python setup.py build_ext

Once successful, you will see: ----> build and copy successfully!

If you want to know which libs are built and where they are copied, read the file setup.py.

Build External Libs

Add PYTHONPATH

The last step is to add the repo root to PYTHONPATH. You have two choices:

  1. permanent way: append export PYTHONPATH=[repo root]:$PYTHONPATH to the file ~/.bashrc
  2. temporary way: whenever you want to code with the repo, add the following lines first:
    1. import sys
    2. sys.path.append('path to the repo')

The repo is now ready for use. By the way, we deliberately do not install the repo into the default Python libs location (like /python3.x/site-packages/) so that it is easier to modify and develop.
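For the temporary way, a minimal sketch (the repo path below is a placeholder; replace it with wherever you cloned the repo):

```python
import sys

# Placeholder path: replace with the actual location of your clone.
repo_root = '/path/to/lfd-repo'
if repo_root not in sys.path:
    sys.path.append(repo_root)

# After this, modules inside the repo can be imported as usual.
```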

Docker Installation

Please check here for more details. Thanks to @ashuezy.

2.2 Play with the code

We present the details of how to use the code through two specific tasks.

In addition, we describe the structure of the code in the wiki.

Acknowledgement

Citation

If you find this repo useful, please cite the repo website directly.