# Ground-Challenge
A Multi-sensor SLAM Dataset Focusing on Corner Cases for Ground Robots
<div align=center> <img src="fig/scenarios.jpg" width="800px"> </div> <p align="center">Figure 1. Different corner cases for SLAM</p>Notice:
We strongly recommend that newly proposed SLAM algorithms be tested on our Ground-Challenge benchmark, because our data has the following features:
- A rich pool of sensory information, including RGB-D images, wheel odometry, IMU measurements and more.
- Diverse corner cases such as aggressive motion, severe occlusion, changing illumination, few textures, pure rotation, motion blur, wheel suspension, etc.
- A great challenge to existing cutting-edge SLAM algorithms, including VINS-Mono, ORB-SLAM3, VINS-RGBD, VIW-Fusion and TartanVO. If your proposed algorithm outperforms these SOTA systems on this dataset, your paper will be much more convincing and valuable.
## License
The paper link is here. If you use Ground-Challenge in an academic work, please cite:
@inproceedings{yin2023ground,
title={Ground-challenge: A multi-sensor slam dataset focusing on corner cases for ground robots},
author={Yin, Jie and Yin, Hao and Liang, Conghui and Jiang, Haitao and Zhang, Zhengyou},
booktitle={2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)},
pages={1--5},
year={2023},
organization={IEEE}
}
## ABSTRACT
We introduce Ground-Challenge: a novel dataset collected by a ground robot with multiple sensors, including an RGB-D camera, an inertial measurement unit (IMU), a wheel odometer and a 3D LiDAR, to support research on corner cases of visual SLAM systems. Our dataset comprises 36 trajectories with diverse corner cases such as aggressive motion, severe occlusion, changing illumination, few textures, pure rotation, motion blur, wheel suspension, etc. Several state-of-the-art SLAM algorithms are tested on our dataset, showing that these systems drift severely or even fail on specific sequences. We will release the dataset and relevant materials upon paper publication to benefit the research community.
## 1. SENSOR SETUP
### 1.1 Acquisition Platform
The ground robot is shown below. The unit in the figure is centimeters.
<div align=center> <img src="fig/robot.jpg" width="600px"> </div> <p align="left">Figure 2. The data capture robot.</p>1.2 Sensor parameters
All the sensors and tracking devices, together with their most important parameters, are listed below:
- LiDAR: Velodyne VLP-16, 360° horizontal field of view (FOV), -30° to +10° vertical FOV, 10 Hz, max range 200 m, range resolution 3 cm, horizontal angular resolution 0.2°.
- V-I sensor: RealSense D435i, RGB/depth 640×480 at 15 Hz, 69° horizontal FOV, 42.5° vertical FOV; built-in 6-axis IMU at 200 Hz.
- IMU: Xsens MTi-300, 9-axis, 400 Hz.
- Wheel odometer: AgileX, 2D, 25 Hz.
The rostopics of our rosbag sequences are listed as follows:
- LiDAR: /velodyne_points
- V-I sensor: /camera/color/image_raw, /camera/depth/image_raw, /camera/imu
- IMU: /imu/data
- Wheel odometer: /odom
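For reference, below is a minimal Python sketch for inspecting one of the sequences, assuming a ROS1 environment with the `rosbag` package installed; the bag file name is hypothetical and only illustrates the usage.

```python
# A minimal sketch for inspecting a Ground-Challenge sequence (ROS1 assumed).
# The bag file name below is hypothetical.
import rosbag

with rosbag.Bag("Office1.bag") as bag:
    # List every topic with its message type, count and measured rate.
    for topic, info in bag.get_type_and_topic_info().topics.items():
        rate = info.frequency or 0.0
        print(f"{topic}: {info.msg_type}, {info.message_count} msgs, {rate:.1f} Hz")

    # Example: iterate over the wheel-odometry messages on /odom.
    for _, msg, t in bag.read_messages(topics=["/odom"]):
        p = msg.pose.pose.position
        print(f"{t.to_sec():.3f}  x={p.x:.3f}  y={p.y:.3f}")
```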
## 2. DATASET SEQUENCES
An overview of Ground-Challenge is given in the table below:
Scenario | Darkroom | Occlusion | Office | Room | Wall | Motionblur | Hall | Loop | Roughroad | Corridor | Rotation | Static | Slope | TOTAL |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Number | 3 | 4 | 3 | 3 | 3 | 3 | 3 | 2 | 3 | 2 | 3 | 2 | 2 | 36 |
Dist/m | 92.0 | 273.8 | 75.5 | 102.1 | 86.7 | 166.6 | 236.3 | 371.8 | 68.1 | 164.3 | 12.4 | 1.9 | 128.5 | 1780.0 |
Duration/s | 203.6 | 334.2 | 164.0 | 154.7 | 189.3 | 145.5 | 302.4 | 332.7 | 186.3 | 198.1 | 183.2 | 92.6 | 195.0 | 2681.6 |
Size/GB | 6.1 | 9.9 | 4.7 | 4.6 | 5.6 | 4.3 | 8.7 | 9.9 | 5.4 | 5.8 | 5.4 | 2.7 | 5.7 | 78.8 |
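Overall, the 36 sequences cover 1780.0 m in 2681.6 s, i.e. an average platform speed of roughly 0.66 m/s.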
### 2.1 Visual Challenges
Sequence Name | Total Size | Duration | Features | Rosbag |
---|---|---|---|---|
Darkroom1 | 2.9 GB | 100 s | dim light, going into a room | Rosbag |
Darkroom2 | 2.3 GB | 76 s | sharp turn | Rosbag |
Darkroom3 | 1.9 GB | 64 s | dim light | Rosbag |
Occlusion1 | 2.9 GB | 97 s | moving feet, far features | Rosbag |
Occlusion2 | 3.2 GB | 108 s | hand occlusion | Rosbag |
Occlusion3 | 2.6 GB | 89 s | hand occlusion | Rosbag |
Occlusion4 | 1.2 GB | 40 s | complete occlusion | Rosbag |
Office1 | 1.3 GB | 46 s | exposure change | Rosbag |
Office2 | 1.9 GB | 66 s | going into a dark room | Rosbag |
Office3 | 1.5 GB | 52 s | office | Rosbag |
Room1 | 1.3 GB | 46 s | exposure change | Rosbag |
Room2 | 1.9 GB | 66 s | going into a dark room | Rosbag |
Room3 | 1.5 GB | 52 s | office | Rosbag |
Motionblur1 | 1.5 GB | 52 s | aggressive motion | Rosbag |
Motionblur2 | 1.6 GB | 54 s | aggressive motion | Rosbag |
Motionblur3 | 1.2 GB | 40 s | aggressive motion | Rosbag |
Wall1 | 1.7 GB | 59 s | wall in a corridor | Rosbag |
Wall2 | 2.0 GB | 66 s | wall in a big hall | Rosbag |
Wall3 | 3.9 GB | 65 s | wall in a corridor | Rosbag |
### 2.2 Wheel Challenges
Sequence Name | Total Size | Duration | Features | Rosbag |
---|---|---|---|---|
Hall1 | 2.6 GB | 91 s | slippery ground, a reflective surface | Rosbag |
Hall2 | 3.2 GB | 110 s | slippery ground, a reflective surface | Rosbag |
Hall3 | 2.9 GB | 101 s | slippery ground, walking human | Rosbag |
Loop1 | 4.1 GB | 97 s | moving feet, far features | Rosbag |
Loop2 | 5.8 GB | 137 s | hand occlusion | Rosbag |
Roughroad1 | 2.2 GB | 75 s | rough road | Rosbag |
Roughroad2 | 1.5 GB | 52 s | rough road | Rosbag |
Roughroad3 | 1.8 GB | 59 s | rough road | Rosbag |
### 2.3 Specific Movement Patterns
Sequence Name | Total Size | Duration | Features | Rosbag |
---|---|---|---|---|
Corridor1 | 2.9 GB | 100 s | zigzag, long corridor | Rosbag |
Corridor2 | 2.9 GB | 98 s | straight forward, long corridor | Rosbag |
Rotation1 | 1.6 GB | 53 s | moving feet, far features | Rosbag |
Rotation2 | 2.1 GB | 73 s | hand occlusion | Rosbag |
Rotation3 | 1.7 GB | 57 s | rough road | Rosbag |
Static1 | 1.6 GB | 56 s | rough road | Rosbag |
Static2 | 1.1 GB | 37 s | rough road | Rosbag |
Slope1 | 2.8 GB | 96 s | slope | Rosbag |
Slope2 | 2.9 GB | 99 s | slope | Rosbag |
## 3. Configuration Files
We provide configuration files for several cutting-edge baseline methods, including VINS-RGBD, TartanVO, VINS-Mono and VIW-Fusion.
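As a quick sanity check before running a baseline, the sketch below reads the camera and IMU topic names from a VINS-Mono-style config and compares them against the rostopics listed in Section 1.2. The config path and the key names (imu_topic, image_topic) are assumptions based on the usual VINS-Mono OpenCV-YAML layout, not a guarantee about the files shipped with this repository.

```python
# A minimal sketch, assuming the provided config follows the common
# VINS-Mono-style OpenCV YAML layout; the file path below is hypothetical.
import cv2

fs = cv2.FileStorage("config/vins_mono/ground_challenge.yaml", cv2.FILE_STORAGE_READ)
image_topic = fs.getNode("image_topic").string()
imu_topic = fs.getNode("imu_topic").string()
fs.release()

# The Ground-Challenge rosbags publish the RGB image on /camera/color/image_raw
# and inertial data on /camera/imu (D435i) or /imu/data (Xsens), so the config
# should reference those topics before launching the estimator.
print("image topic:", image_topic)
print("imu topic:", imu_topic)
assert image_topic == "/camera/color/image_raw"
assert imu_topic in ("/camera/imu", "/imu/data")
```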