CCPD (Chinese City Parking Dataset, ECCV)

Update on 10/03/2019. The CCPD dataset has been updated. With over 300k images and refined annotations, we are confident that the images in the CCPD subsets are now much more challenging than before.

(If you benefit from this dataset, please cite our paper.) It can be downloaded from and extracted with (tar xf CCPD2019.tar.xz):

train/val/test split

The split file is available under the 'split/' folder.

Images in CCPD-Base are split into train and validation sets. The sub-datasets (CCPD-DB, CCPD-Blur, CCPD-FN, CCPD-Rotate, CCPD-Tilt, CCPD-Challenge) in CCPD are used for testing.
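To iterate over a split, a minimal loading sketch is shown below. It assumes each file under split/ lists one image path per line, relative to the dataset root (the file names train.txt/val.txt/test.txt are assumptions here); adjust the names to match your copy of the folder.

  import os

  # Minimal sketch: read one split file from the 'split/' folder and return
  # absolute image paths. The layout (one relative path per line) is an
  # assumption; check the files shipped with your download.
  def load_split(dataset_root, split_name):
      split_file = os.path.join(dataset_root, 'split', split_name + '.txt')
      with open(split_file) as f:
          return [os.path.join(dataset_root, line.strip()) for line in f if line.strip()]

  train_images = load_split('CCPD2019', 'train')
  print(len(train_images), train_images[:2])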


Update on 16/09/2020. We added a new-energy-vehicle sub-dataset (CCPD-Green), whose license plates have eight characters.

It can be downloaded from:

metric

Each image in CCPD contains only a single license plate (LP), so we do not consider recall and concentrate on precision. Detectors are allowed to predict only one bounding box per image.
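As a minimal illustration of this metric (boxes assumed to be in (x1, y1, x2, y2) form; a detection is counted as correct when its IoU with the ground truth exceeds a threshold, 0.7 in the paper), precision reduces to the fraction of images whose single predicted box matches the ground truth:

  # Minimal sketch of the detection metric: one ground-truth LP and one
  # predicted box per image.
  def iou(box_a, box_b):
      x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
      x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
      inter = max(0, x2 - x1) * max(0, y2 - y1)
      area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
      area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
      return inter / float(area_a + area_b - inter)

  def detection_precision(gt_boxes, pred_boxes, iou_thresh=0.7):
      correct = sum(iou(g, p) > iou_thresh for g, p in zip(gt_boxes, pred_boxes))
      return correct / float(len(gt_boxes))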

benchmark

If you want to provide more baseline results or have questions about the provided results, please raise an issue.

detection
| Model | FPS | AP | DB | Blur | FN | Rotate | Tilt | Challenge |
|-------------|----|-------|-------|-------|-------|-------|-------|-----------|
| Faster-RCNN | 11 | 84.98 | 66.73 | 81.59 | 76.45 | 94.42 | 88.19 | 89.82 |
| SSD300      | 25 | 86.99 | 72.90 | 87.06 | 74.84 | 96.53 | 91.86 | 90.06 |
| SSD512      | 12 | 87.83 | 69.99 | 84.23 | 80.65 | 96.50 | 91.26 | 92.14 |
| YOLOv3-320  | 52 | 87.23 | 71.34 | 82.19 | 82.44 | 96.69 | 89.17 | 91.46 |
recognition

We provide a baseline method for recognition by appending an LP recognition model, Holistic-CNN (HC) (refer to the paper 'Holistic recognition of low quality license plates by CNN using track annotated data'), to the detector.

| Model | FPS | AP | DB | Blur | FN | Rotate | Tilt | Challenge |
|-----------|----|-------|-------|-------|-------|-------|-------|-----------|
| SSD512+HC | 11 | 43.42 | 34.47 | 25.83 | 45.24 | 52.82 | 52.04 | 44.62 |

The 'AP' column shows the precision on the whole test set. The test set contains six parts: DB (ccpd_db/), Blur (ccpd_blur/), FN (ccpd_fn/), Rotate (ccpd_rotate/), Tilt (ccpd_tilt/), Challenge (ccpd_challenge/).
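As a rough sketch of how such a detector-plus-recognizer pipeline can be evaluated (the detector and recognizer callables below are hypothetical placeholders rather than the actual SSD or HC interfaces; the paper counts a prediction as correct when the box IoU exceeds 0.6 and every character of the LP number is right), reusing the iou() helper from the detection sketch above:

  # Hypothetical end-to-end evaluation loop for detection followed by recognition.
  def recognition_precision(images, gt_boxes, gt_numbers, detector, recognizer,
                            iou_thresh=0.6):
      correct = 0
      for img, gt_box, gt_number in zip(images, gt_boxes, gt_numbers):
          x1, y1, x2, y2 = detector(img)               # single predicted LP box
          pred_number = recognizer(img[y1:y2, x1:x2])  # read the cropped plate
          if iou((x1, y1, x2, y2), gt_box) > iou_thresh and pred_number == gt_number:
              correct += 1
      return correct / float(len(images))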

This repository provides an open-source dataset for license plate detection and recognition, described in 《Towards End-to-End License Plate Detection and Recognition: A Large Dataset and Baseline》. The dataset is open-source under the MIT license. More details are available in our ECCV 2018 paper (also available in this GitHub repository). If you benefit from this dataset, please cite our paper as follows:

@inproceedings{xu2018towards,
  title={Towards End-to-End License Plate Detection and Recognition: A Large Dataset and Baseline},
  author={Xu, Zhenbo and Yang, Wei and Meng, Ajin and Lu, Nanxue and Huang, Huan},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={255--271},
  year={2018}
}

Specification of the categories above:

Demo

Demo code and several images are provided under the rpnet/ folder. After you obtain "fh02.pth" by downloading or training, run the demo as follows. The demo code will modify the images in the rpnet/demo folder, so you can check the results by opening the demo images.


  python demo.py -i [ROOT/rpnet/demo/] -m [***/fh02.pth]

A nearly well-trained model for testing and fun (for lack of time it was trained for only 5 epochs, but that is enough for testing):

We encourage comparison with SOTA detectors like FCOS rather than RPnet, as the architecture of RPnet is quite old-fashioned.

Training instructions

Input parameters are well commented in the Python code (Python 2 and 3 are both fine; the PyTorch version should be >= 0.3). You can increase batchSize as long as enough GPU memory is available.

Environment (not so important as long as you can run the code):

For convenience, we provide a trained wR2 model and a trained RPnet model; you can download them from Google Drive or Baidu Yun.

First, train the localization network defined in wR2.py (we provide a trained one as mentioned above, downloadable from Google Drive or Baidu Yun) as follows:


  python wR2.py -i [IMG FOLDERS] -b 4

After wR2 is fine-tuned, train RPnet, defined in rpnet.py (we provide a trained one as mentioned above, downloadable from Google Drive or Baidu Yun). Please specify the variable wR2Path (the path of the well-trained wR2 model) in rpnet.py.


  python rpnet.py -i [TRAIN IMG FOLDERS] -b 4 -se 0 -f [MODEL SAVE FOLDER] -t [TEST IMG FOLDERS]

Test instructions

After fine-tuning RPnet, uncompress the zip archive and select it as the test directory. The argument after -s is a folder for storing failure cases.


  python rpnetEval.py -m [MODEL PATH, like /**/fh02.pth] -i [TEST DIR] -s [FAILURE SAVE DIR]

Dataset Annotations

Annotations are embedded in file name.

A sample image name is "025-95_113-154&383_386&473-386&473_177&454_154&383_363&402-0_0_22_27_27_33_16-37-15.jpg". Each name can be split by '-' into seven fields, which are explained below.

provinces = ["皖", "沪", "津", "渝", "冀", "晋", "蒙", "辽", "吉", "黑", "苏", "浙", "京", "闽", "赣", "鲁", "豫", "鄂", "湘", "粤", "桂", "琼", "川", "贵", "云", "藏", "陕", "甘", "青", "宁", "新", "警", "学", "O"]
alphabets = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W',
             'X', 'Y', 'Z', 'O']
ads = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'J', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',
       'Y', 'Z', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'O']
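As a convenience, here is a minimal parsing sketch for a file name, using the provinces, alphabets and ads lists above. It assumes the usual CCPD field order (area ratio, tilt degrees, bounding box corners, four vertices, LP number indices, brightness, blurriness) and is meant as an illustration rather than official tooling:

  # Minimal sketch: decode one CCPD file name into its seven annotation fields.
  # The first LP index selects from `provinces`, the second from `alphabets`,
  # and the remaining indices from `ads`.
  def parse_ccpd_name(name):
      stem = name.rsplit('.', 1)[0]
      area, tilt, bbox, vertices, lp, brightness, blurriness = stem.split('-')
      h_tilt, v_tilt = (int(v) for v in tilt.split('_'))
      box = [tuple(int(c) for c in pt.split('&')) for pt in bbox.split('_')]
      pts = [tuple(int(c) for c in pt.split('&')) for pt in vertices.split('_')]
      idx = [int(i) for i in lp.split('_')]
      number = provinces[idx[0]] + alphabets[idx[1]] + ''.join(ads[i] for i in idx[2:])
      return {
          'area': int(area),          # area ratio of the LP to the whole image
          'tilt': (h_tilt, v_tilt),   # horizontal and vertical tilt degrees
          'bbox': box,                # bounding box corners
          'vertices': pts,            # four vertices of the LP
          'number': number,           # decoded LP string, e.g. "皖A..."
          'brightness': int(brightness),
          'blurriness': int(blurriness),
      }

  print(parse_ccpd_name("025-95_113-154&383_386&473-"
                        "386&473_177&454_154&383_363&402-0_0_22_27_27_33_16-37-15.jpg"))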

Acknowledgement

If you have any problems with CCPD, please contact detectrecog@gmail.com.

Please cite the paper 《Towards End-to-End License Plate Detection and Recognition: A Large Dataset and Baseline》 if you benefit from this dataset.