PEDRo: an Event-based Dataset for Person Detection in Robotics

The PEDRo events dataset is specifically designed for person detection in service robotics. It was collected with a moving DAVIS346 event camera in a wide variety of scenarios and lighting conditions.

The dataset is composed of:

- 119 recordings;
- 43 259 bounding boxes;
- 27 000 labeled samples, each covering 40 ms of events.

This dataset focuses on people, making it a relevant addition to other existing event-based datasets that tackle the person detection task. The PEDRo dataset can be downloaded here.

Citations

If you use the PEDRo dataset, please cite our paper. The code is available under the BSD-2-Clause License.

@inproceedings{bbpprsPedro2023,
      title={PEDRo: an Event-based Dataset for Person Detection in Robotics},
      author={Boretti, Chiara and Bich, Philippe and Pareschi, Fabio and Prono, Luciano and Rovatti, Riccardo and Setti, Gianluca},
      booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
      month=jun,
      year={2023}
}
  1. C. Boretti, P. Bich, F. Pareschi, L. Prono, R. Rovatti, and G. Setti, "PEDRo: an Event-based Dataset for Person Detection in Robotics," in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), June 2023.

Details on the dataset

The dataset contains people from 20 to 70 years old, recorded in a wide variety of scenarios and meteorological conditions (sunny, snowy, and rainy), during day and night. The scenarios are both outdoor and indoor, ranging from mountains, lakes, and seafronts to offices and houses. Most of the people in the dataset are walking, but there are also examples of people standing and sitting.

<p align="center"> <img src="assets/single_person.gif" width="400" height="301"> <img src="assets/two_people.gif" width="400" height="301"> </p>

Dataset collection and labeling

The dataset has been recorded using a DAVIS346 event camera, which simultaneously outputs events and grayscale frames. The camera was hand-carried to capture the events, and the height of the sensor varies among recordings. The dataset has been manually labeled by the authors using the grayscale images.

<p align="center"> <img src="assets/four_sae.gif" width="450" height="366"/> </p>

The dataset is composed of 119 recordings and has been split into train, validation, and test sets. Every recording belongs entirely to one of these three groups. The 43 259 bounding boxes are divided into 34 243 (79.2%) for training, 4372 (10.1%) for validation, and 4179 (9.7%) for testing. All the bounding boxes are contained in 27 000 samples. Each sample is the stream of events collected in a 40 ms time interval, a duration determined by the acquisition rate of the grayscale images used for the manual labeling process.
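For illustration, extracting one such 40 ms sample from a raw event stream amounts to a timestamp filter. The sketch below assumes a structured NumPy array with a timestamp field `t` in microseconds; the actual PEDRo schema may use different field names or units.

```python
import numpy as np

def slice_sample(events, t_start_us, window_us=40_000):
    """Return the events falling inside one 40 ms window.

    `events` is assumed to be a structured NumPy array with a
    timestamp field 't' in microseconds -- an assumption, not the
    dataset's documented schema.
    """
    mask = (events["t"] >= t_start_us) & (events["t"] < t_start_us + window_us)
    return events[mask]
```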

Dataset format

The dataset is organized as follows:
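Whatever the exact layout, a sample can typically be loaded with NumPy. The following is only a minimal sketch: the `.npy` event storage, the one-box-per-row text labels, and the function name are all assumptions, not the dataset's documented structure.

```python
import numpy as np
from pathlib import Path

def load_sample(events_path: Path, labels_path: Path):
    """Load one sample and its bounding boxes.

    Hypothetical layout: a .npy file holding the raw events of one
    40 ms sample, plus a text file with one bounding box per row.
    Adapt the paths and parsing to the actual PEDRo files.
    """
    events = np.load(events_path)              # raw event array
    boxes = np.loadtxt(labels_path, ndmin=2)   # one bounding box per row
    return events, boxes
```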

Utils

The Jupyter notebook Visualize_dataset.ipynb can be used to view some examples of the recordings, rendered as Surfaces of Active Events (SAE) with the corresponding labels. Some test samples are provided in the example folder.
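For reference, a Surface of Active Events stores, at each pixel, the timestamp of the most recent event fired there. A minimal sketch, assuming structured events with `x`, `y`, and `t` fields and the DAVIS346 resolution (346x260):

```python
import numpy as np

def surface_of_active_events(events, height=260, width=346):
    """Build a Surface of Active Events (SAE).

    Each pixel holds the timestamp of the most recent event that
    occurred at that location. The resolution defaults to the
    DAVIS346 sensor; the event field names are assumptions.
    """
    sae = np.zeros((height, width), dtype=np.float64)
    # Events are assumed sorted by time, so later events overwrite
    # earlier ones at the same pixel.
    sae[events["y"], events["x"]] = events["t"]
    return sae
```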