
ML3DOP: A Multi-Camera and LiDAR Dataset for 3D Occupancy Perception

News

1. SENSOR SETUP

1.1 Acquisition Platform

Physical drawings and schematics of the ground robot are given below.

<div align=center> <img src="img_readme/car.png" width="800px"> </div> <p align="left">Figure 1. The WHEELTEC ground robot equipped with a LiDAR and four RGB cameras. The sensor axes are marked in different colors: red for X, green for Y, and blue for Z. The camera IDs are numbered 0~3 from front_right to front_left.</p>

1.2 Sensor parameters

All the sensors and their most important parameters are listed below:

2. DATASET SEQUENCES

We now make public ALL THE SEQUENCES, including the images captured by the four cameras and the corresponding lidar_txt/lidar_pcd files.

All the sequences were acquired at Shandong University, Qingdao, and cover both indoor and outdoor scenes. Dataset link: ML3DOP

<div align=center> <img src="img_readme/imgs.png" width="800px"> </div> <p align="left">Figure 2. Image frames acquired by the four cameras.</p> <div align=center> <img src="img_readme/lidar.png" width="400px"> </div> <p align="left">Figure 3. A LiDAR frame.</p>

An overview of ML3DOP is given in the tables below:

Indoor

| Scenario | Number | Size of imgs/GB | Size of lidar_txt/GB | Size of lidar_pcd/GB | Duration/s |
| --- | --- | --- | --- | --- | --- |
| Canteen | 3 | 17.4 | 13.7 | 27.3 | 747 |
| Fengyu | 1 | 8.3 | 5.1 | 9.5 | 261 |
| Huiwen | 2 | 3.3 | 5.7 | 10.6 | 272 |
| Library | 1 | 5.1 | 4.5 | 11.9 | 254 |
| Museum | 2 | 10.2 | 13.4 | 32 | 669 |
| N5 | 4 | 18.2 | 26.6 | 51.8 | 1295 |
| N7 | 2 | 3.7 | 5.9 | 11.2 | 271 |
| Shoppingmall | 2 | 5.2 | 6.2 | 11.7 | 312 |
| Zhensheng | 1 | 7.3 | 11.6 | 21.7 | 580 |
| TOTAL | 18 | 78.7 | 92.7 | 187.7 | 4661 |

Outdoor

| Scenario | Number | Size of imgs/GB | Size of lidar_txt/GB | Size of lidar_pcd/GB | Duration/s |
| --- | --- | --- | --- | --- | --- |
| Between_zhensheng_and_huagang | 1 | 6.0 | 4.7 | 8.8 | 186 |
| Dark | 1 | 3.2 | 3.2 | 6.0 | 155 |
| Dark_library | 1 | 15.1 | 12.5 | 23.7 | 739 |
| Fengyu_north | 1 | 16.4 | 6.8 | 13.0 | 423 |
| Huagang-zhensheng_west | 1 | 28.8 | 13.7 | 25.8 | 794 |
| Museum | 1 | 20.0 | 8.4 | 17.7 | 788 |
| N1_west-N5_north | 1 | 28.9 | 14.8 | 27.8 | 837 |
| N5_north-N1_west | 1 | 26.3 | 12.2 | 22.7 | 661 |
| Playground_south | 1 | 3.6 | 2.6 | 4.9 | 143 |
| Zhensheng_north | 1 | 20.2 | 10.1 | 19.2 | 595 |
| Zhensheng_north-N1_north | 1 | 5.5 | 2.4 | 4.7 | 155 |
| TOTAL | 11 | 174.0 | 91.4 | 174.3 | 5476 |

Our model 3DOPFormer has so far been trained and tested only on the indoor scenes; the indoor scene dataset is divided as follows:

| train | val | test |
| --- | --- | --- |
| canteen_floor_1 | huiwen_floor_1 | N7_floor_1 |
| canteen_floor_2 | N5_floor_1_north | N5_floor_1_south |
| canteen_floor_3 | | |
| fengyu | | |
| huiwen_floor_2 | | |
| library_floor_2 | | |
| museum_floor_2 | | |
| museum_floor_4 | | |
| N5_floor_1 | | |
| N5_floor_2 | | |
| N7_floor_2 | | |
| shoppingmall_floor_1 | | |
| shoppingmall_floor_2 | | |
| zhensheng | | |

2.1 Indoors

| Sequence name | Collection date | Size of imgs/GB | Size of lidar_txt/GB | Size of lidar_pcd/GB | Duration/s |
| --- | --- | --- | --- | --- | --- |
| indoor_canteen_floor_1 | 2023-03-07 | 6.1 | 4.8 | 10.6 | 254 |
| indoor_canteen_floor_2 | 2023-03-07 | 5.8 | 4.0 | 7.5 | 219 |
| indoor_canteen_floor_3 | 2023-03-07 | 5.5 | 4.9 | 9.2 | 274 |
| indoor_fengyu | 2023-03-09 | 8.3 | 5.1 | 9.5 | 261 |
| indoor_huiwen_floor_1 | 2023-03-08 | 1.5 | 2.8 | 5.2 | 129 |
| indoor_huiwen_floor_2 | 2023-03-08 | 1.8 | 2.9 | 5.4 | 143 |
| indoor_library_floor_2 | 2023-03-06 | 5.1 | 4.5 | 11.9 | 254 |
| indoor_museum_floor_2 | 2023-03-08 | 4.1 | 5.4 | 13.7 | 266 |
| indoor_museum_floor_4 | 2023-03-08 | 6.1 | 8.0 | 18.3 | 403 |
| indoor_N5_floor_1 | 2023-03-06 | 7.4 | 12.1 | 23.2 | 596 |
| indoor_N5_floor_2 | 2023-03-06 | 7.7 | 11.6 | 22.3 | 577 |
| indoor_N5_floor_1_north | 2022-12-04 | 1.6 | 1.6 | 2.9 | 59 |
| indoor_N5_floor_1_south | 2022-12-04 | 1.5 | 1.3 | 3.4 | 63 |
| indoor_N7_floor_1 | 2023-03-07 | 1.5 | 2.6 | 5.0 | 112 |
| indoor_N7_floor_2 | 2023-03-07 | 2.2 | 3.3 | 6.2 | 159 |
| indoor_shoppingmall_floor_1 | 2023-03-07 | 2.7 | 2.7 | 5.2 | 137 |
| indoor_shoppingmall_floor_2 | 2023-03-07 | 2.5 | 3.5 | 6.5 | 175 |
| indoor_zhensheng | 2023-03-06 | 7.3 | 11.6 | 21.7 | 580 |

2.2 Outdoors

| Sequence name | Collection date | Size of imgs/GB | Size of lidar_txt/GB | Size of lidar_pcd/GB | Duration/s |
| --- | --- | --- | --- | --- | --- |
| outdoor_between_zhensheng_and_huagang | 2023-03-07 | 6.0 | 4.7 | 8.8 | 186 |
| outdoor_dark | 2023-03-07 | 3.2 | 3.2 | 6.0 | 155 |
| outdoor_dark_library | 2023-03-08 | 15.1 | 12.5 | 23.7 | 739 |
| outdoor_fengyu_north | 2023-03-09 | 16.4 | 6.8 | 13.0 | 423 |
| outdoor_huagang-zhensheng_west | 2023-03-09 | 28.8 | 13.7 | 25.8 | 794 |
| outdoor_museum | 2023-03-07 | 20.0 | 8.4 | 17.7 | 788 |
| outdoor_N1_west-N5_north | 2023-03-06 | 28.9 | 14.8 | 27.8 | 837 |
| outdoor_N5_north-N1_west | 2023-03-06 | 26.3 | 12.2 | 22.7 | 661 |
| outdoor_playground_south | 2023-03-07 | 3.6 | 2.6 | 4.9 | 143 |
| outdoor_zhensheng_north | 2023-03-06 | 20.2 | 10.1 | 19.2 | 595 |
| outdoor_zhensheng_north-N1_north | 2023-03-07 | 5.5 | 2.4 | 4.7 | 155 |

3. DEVELOPMENT TOOLKITS

Dependencies

3.1 Extracting lidar data frame from pcap files

We use the LSC16-[Client] software provided by LeiShen Intelligent Company to acquire LiDAR data on Windows, which stores the data as pcap files, so the LiDAR frames must be extracted from these pcap files.

First, we filter out the data packets with Wireshark, since the pcap files contain both data and device packets. The filtered pcap files are provided in our dataset: lidar_pcap_indoor.zip and lidar_pcap_outdoor.zip.

Then clone this project and check the parameters in params.yaml. On Linux, run:

```bash
bash run_indoor.sh
```

or

```bash
bash run_outdoor.sh
```

You can change the --path and --out-dir parameters in the above .sh files as needed.

On Windows, run:

```bash
python main.py --path your_path --out-dir your_out-dir --config=.\params.yaml
```

After this step, we get TXT/PCD files named by index and timestamp (Beijing time).

Output

Each file stores one full 360° frame.<br /> All TXT files have the following fields:<br /> Timestamp, Laser_ID, X [m], Y [m], Z [m], Intensity [0-255], Vertical_angle [degrees], Horizontal_angle [degrees], Distance [m]

All PCD files have the following fields:<br /> X [m], Y [m], Z [m], Intensity [0-255]
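The Cartesian fields are derived from the polar ones. A minimal sketch of that conversion, assuming the usual mechanical-LiDAR convention (horizontal angle measured about the Z axis, vertical angle as beam elevation above the XY plane); check the exact axis and sign convention against the LSC16 technical manual:

```python
import math

def polar_to_xyz(distance, vertical_angle_deg, horizontal_angle_deg):
    """Convert one LiDAR return from polar to Cartesian coordinates.

    Assumed convention: horizontal angle rotates about Z, vertical angle
    is elevation above the XY plane. Verify against the sensor manual.
    """
    v = math.radians(vertical_angle_deg)
    h = math.radians(horizontal_angle_deg)
    xy = distance * math.cos(v)          # projection onto the XY plane
    return (xy * math.cos(h),            # X [m]
            xy * math.sin(h),            # Y [m]
            distance * math.sin(v))      # Z [m]

# Example: a 10 m return at 0° elevation and 90° azimuth lies on the Y axis.
x, y, z = polar_to_xyz(10.0, 0.0, 90.0)
```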

The TXT files and PCD files are provided in our dataset: lidar_indoor_txt.zip, lidar_outdoor_txt.zip, lidar_indoor_pcd.zip, lidar_outdoor_pcd.zip.
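Given the field list above, a TXT frame can be loaded with NumPy. This is a sketch, not part of the toolkit; the comma delimiter is an assumption, so adjust it if the files are space-separated:

```python
import numpy as np

def load_txt_frame(path):
    """Load one TXT frame into (xyz, intensity) arrays.

    Columns as listed above: Timestamp, Laser_ID, X, Y, Z, Intensity,
    Vertical_angle, Horizontal_angle, Distance. The comma delimiter is
    an assumption; change it if the files are space-separated.
    """
    pts = np.loadtxt(path, delimiter=",", ndmin=2)
    xyz = pts[:, 2:5]          # X, Y, Z in metres
    intensity = pts[:, 5]      # 0-255
    return xyz, intensity
```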

Note

This part is based on https://github.com/hitxing/Lidar-data-decode/, which supports the LSC32. In fact, this toolkit can support any LiDAR, as long as you change the parameters following the corresponding technical manual.

3.2 Generating data_all.pkl

For convenience, we have generated a .pkl file that stores a data_dict. In this data_dict, imgs_path entries are indexed by img0_key and camera_id, and lidar_path entries are indexed by img0_key (img0_key is the camera-0 image path of the frame).
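The structure just described can be sketched as follows. This is an illustration only: the directory layout (`camera_0` … `camera_3` folders with matching filenames) and the inner key names are assumptions, not the toolkit's actual ones.

```python
import os
import pickle

def build_data_dict(imgs_path, lidar_path, num_cameras=4):
    """Sketch of the data_dict described above: per-frame image paths
    grouped by camera id under each camera-0 frame key (img0_key), plus
    the matching LiDAR file path. Directory layout and key names here
    are hypothetical, not the toolkit's actual ones."""
    data_dict = {}
    cam0_dir = os.path.join(imgs_path, "camera_0")
    for fname in sorted(os.listdir(cam0_dir)):
        img0_key = os.path.join(cam0_dir, fname)
        data_dict[img0_key] = {
            "imgs_path": {cam_id: os.path.join(imgs_path, f"camera_{cam_id}", fname)
                          for cam_id in range(num_cameras)},
            "lidar_path": os.path.join(lidar_path, os.path.splitext(fname)[0] + ".pcd"),
        }
    return data_dict
```

The resulting dict can then be written out with `pickle.dump`.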

On Linux, run:

```bash
bash data_pkl.sh
```

You can change the --imgs_path and --lidar_path parameters in the above .sh file as needed.

On Windows, run:

```bash
python data_pkl.py --imgs_path your_imgs_path --lidar_path your_lidar_path
```

After this step, we get data_all.pkl. This file is provided in our dataset: data_pkl.zip.
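Reading the file back is a plain pickle load. A minimal sketch, assuming the dict layout described above (entries keyed by img0_key); verify it against your own generated file:

```python
import pickle

def load_frames(pkl_path):
    """Iterate over (img0_key, entry) pairs stored in a data_all.pkl.

    The entry layout assumed here follows the description above and may
    differ from the actual file; check before relying on it.
    """
    with open(pkl_path, "rb") as f:
        data_dict = pickle.load(f)
    yield from data_dict.items()
```

For example, `for img0_key, entry in load_frames("data_all.pkl"): ...` walks every synchronized frame.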

Note

Before running, make sure the machine's time zone is set to Beijing time so that the timestamps are correct.

3.3 Calibration

We place the calibration board in front of each camera, record a calibration video per camera, and then use the Autoware calibration toolbox in the ROS environment to calibrate the four cameras separately. The calibration files are provided in our dataset: calibration.zip.
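Once the intrinsics K and the LiDAR-to-camera extrinsics (R, t) are read from the calibration files, a LiDAR point can be projected into an image. A minimal sketch; the matrix values below are illustrative placeholders, not the dataset's actual calibration:

```python
import numpy as np

def project_point(p_lidar, K, R, t):
    """Project a 3D LiDAR point into pixel coordinates.

    K is the 3x3 camera intrinsic matrix and (R, t) the LiDAR-to-camera
    extrinsics from calibration. Lens distortion is ignored here.
    """
    p_cam = R @ p_lidar + t               # LiDAR frame -> camera frame
    if p_cam[2] <= 0:
        return None                       # behind the camera
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]               # pixel (u, v)

# Illustrative parameters only, not the dataset's calibration.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
# A point straight ahead projects to the principal point (320, 240).
uv = project_point(np.array([0.0, 0.0, 5.0]), K, R, t)
```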

4. LICENSE

This work is released under the MIT License and is provided for academic use.