# MMPD-Dataset
The MMPD dataset is proposed in the ECCV 2024 paper "When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset".

Authors: Yi Zhang, Wang Zeng, Sheng Jin, Chen Qian, Ping Luo, Wentao Liu

<img src="./images/dataset.png"/>
<img src="./images/distribution.png"/>
## Data Preparation
MMPD is built upon existing 2D detection datasets, including COCO, CrowdHuman, Object365, LLVIP, InOutDoor, STCrowd, PEDRo, FLIR, and EventPed. To use MMPD, please download the images from the original dataset websites first, then reorganize the data and use our provided annotation files from Google Drive or Baidu Yun (code: mmpd) for training and testing.
After preparing the images and annotations, the project directory should look like this (a quick annotation-loading check is sketched after the tree):
```text
── mmpedestron_datasets
    │── mmpedestron_datasets_ann
    │   │-- crowdhuman_coco/annotation_train_full2coco_231020.json
    │   │-- LLVIP/ann_coco/LLVIP_coco_train_change_cat_id.json
    │   │-- PEDRo_events_dataset/coco_ann/pedro_train.json
    │   │-- ...
    │── mmpedestron_images
        │-- COCO
        │-- CrowdHuman
        │-- Object365
        │-- LLVIP
        │-- InOutDoor
        │-- STCrowd
        │-- ...
```
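The annotation file names suggest the standard COCO format. As a quick sanity check after downloading, something like the following should work. This is a minimal sketch, assuming `pycocotools` is installed and the path matches one example entry from the tree above; adjust it to your local layout.

```python
# Minimal sanity check of one MMPD annotation file (COCO format assumed).
from pycocotools.coco import COCO

# Example path following the layout above; adjust to your local setup.
ann_file = ('mmpedestron_datasets/mmpedestron_datasets_ann/'
            'LLVIP/ann_coco/LLVIP_coco_train_change_cat_id.json')

coco = COCO(ann_file)
print(f'{len(coco.getImgIds())} images, {len(coco.getAnnIds())} annotations')
print('categories:', [c['name'] for c in coco.loadCats(coco.getCatIds())])

# Inspect the annotations of the first image.
img_id = coco.getImgIds()[0]
img_info = coco.loadImgs(img_id)[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
print(img_info['file_name'], '->', len(anns), 'boxes')
```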
## Data Processing
Please obtain the data processing scripts from the following repo: MMPedestron.
### STCrowd: LiDAR-to-RGB
```bash
cd MMPedestron
python tools/datasets_converters/stcrowd_pointcloud2cam.py
```
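The script name suggests it projects STCrowd LiDAR point clouds into the camera frame. For orientation only, here is a minimal sketch of the underlying pinhole projection; the function name `project_lidar_to_image` and the calibration inputs `T_cam_lidar` and `K` are placeholders for illustration, not the repo's actual code or STCrowd's calibration format.

```python
# Illustrative pinhole projection of LiDAR points onto an RGB image plane.
# T_cam_lidar and K stand in for the dataset's real calibration matrices.
import numpy as np

def project_lidar_to_image(points_xyz, T_cam_lidar, K):
    """points_xyz: (N, 3) LiDAR points; T_cam_lidar: (4, 4) extrinsics;
    K: (3, 3) camera intrinsics. Returns (M, 2) pixel coords and (M,) depths."""
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3]          # (3, N)
    # Keep only points in front of the camera.
    pts_cam = pts_cam[:, pts_cam[2] > 0]
    # Perspective division through the intrinsic matrix.
    uv = K @ pts_cam                               # (3, M)
    uv = (uv[:2] / uv[2]).T                        # (M, 2) pixel coordinates
    return uv, pts_cam[2]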
### PEDRo events dataset: Event-to-RGB
```bash
cd MMPedestron
python tools/datasets_converters/multi_process_evs_handler.py
```
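Event-to-RGB conversion typically means rendering an event stream as image frames. A common approach, sketched below for reference, accumulates event polarities into a color image; the function `events_to_frame` and the `(x, y, polarity)` layout are assumptions for illustration and not necessarily PEDRo's exact format or the script's actual method.

```python
# Illustrative event-to-frame rendering: splat event polarities into an image.
# The (xs, ys, ps) event layout is an assumption, not PEDRo's exact format.
import numpy as np

def events_to_frame(xs, ys, ps, height, width):
    """xs, ys: integer pixel coordinates; ps: polarities in {0, 1} or {-1, +1}.
    Returns an (H, W, 3) uint8 image: positive events red, negative blue."""
    frame = np.full((height, width, 3), 128, dtype=np.uint8)  # gray background
    pos = ps > 0
    frame[ys[pos], xs[pos]] = (255, 0, 0)    # positive polarity
    frame[ys[~pos], xs[~pos]] = (0, 0, 255)  # negative polarity
    return frame
```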
## Terms of Use
- The MMPD dataset is ONLY for research and non-commercial use.
- The MMPD dataset consists of multiple existing public datasets (COCO, CrowdHuman, Object365, LLVIP, InOutDoor, STCrowd, PEDRo, FLIR), which are not our property. We do not own the copyright of the images and are not responsible for their content or meaning.
- The MMPD dataset also contains one newly proposed dataset (EventPed). The EventPed dataset is freely available for non-commercial use, and may be redistributed under these conditions. The images and annotations of the EventPed dataset belong to SenseTime Research. For commercial inquiries, please contact Mr. Sheng Jin (jinsheng13[at]foxmail[dot]com); we will send you the detailed agreement.
## Citation
```bibtex
@inproceedings{zhang2024when,
  title={When Pedestrian Detection Meets Multi-Modal Learning: Generalist Model and Benchmark Dataset},
  author={Zhang, Yi and Zeng, Wang and Jin, Sheng and Qian, Chen and Luo, Ping and Liu, Wentao},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024},
  month={September}
}
```