# M2DGR-plus: Extension and Update of M2DGR, a Novel Multi-modal and Multi-scenario SLAM Dataset for Ground Robots (ICRA2022 & ICRA2024)
<div align="center">First Author: Jie Yin ę®·ę° ā š [Paper] / [Arxiv] ā šÆ [M2DGR Dataset] ā āļø [Presentation Video] ā š„[News]
</div>

<div align=center> <img src="./fig/car2.jpg" width="800px"> </div>
<p align="center">Figure 1. Acquisition Platform and Diverse Scenarios.</p>

## News & Updates
- š„ 2024/10/11: Introducing M2DGR-benchmark, which benchmarks the newest SOTA LiDAR-visual SLAM algorithms on both M2DGR and M2DGR-plus!
- 2024/07/15: Introducing a list of LiDAR-visual SLAM systems at awesome-LiDAR-Visual-SLAM, wheel-based SLAM systems at awesome-wheel-slam, and Isaac Sim resources at awesome-isaac-sim (continuously updated).
This dataset is an extension of M2DGR. The accompanying algorithm code is Ground-Fusion, and the preprint of this paper is available on arXiv.
## 1. LICENSE
This work is licensed under the GPL-3.0 license and is provided for academic purposes. If you are interested in our project for commercial purposes, please contact us at robot_yinjie@outlook.com for further communication.
If you use this work in academic research, please cite:
@article{yin2021m2dgr,
title={M2dgr: A multi-sensor and multi-scenario slam dataset for ground robots},
author={Yin, Jie and Li, Ang and Li, Tao and Yu, Wenxian and Zou, Danping},
journal={IEEE Robotics and Automation Letters},
volume={7},
number={2},
pages={2266--2273},
year={2021},
publisher={IEEE}
}
@INPROCEEDINGS{yin2024ground,
author={Yin, Jie and Li, Ang and Xi, Wei and Yu, Wenxian and Zou, Danping},
booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
title={Ground-Fusion: A Low-cost Ground SLAM System Robust to Corner Cases},
year={2024},
volume={},
number={},
pages={8603-8609},
keywords={Location awareness;Visualization;Simultaneous localization and mapping;Accuracy;Wheels;Sensor fusion;Land vehicles},
doi={10.1109/ICRA57147.2024.10610070}}
## 2. SENSOR SETUP
The calibration results are available here. All the sensors and tracking devices, together with their most important parameters, are listed below:
- LiDAR: Robosense 16, 360° horizontal FOV, -30° to +10° vertical FOV, 5 Hz, max range 200 m, range resolution 3 cm, horizontal angular resolution 0.2°
- GNSS: Ublox F9p, GPS/BeiDou/GLONASS/Galileo, 1 Hz
- V-I Sensor: Realsense D435i, RGB/Depth 640Ɨ480, 69° horizontal FOV, 42.5° vertical FOV, 15 Hz; IMU 6-axis, 200 Hz
- IMU: Wheeltec, 9-axis, 100 Hz
- GNSS-IMU: Xsens MTi-680G, GNSS-RTK localization precision 2 cm, 100 Hz; IMU 9-axis, 100 Hz
- Motion-capture system: Vicon Vero 2.2, localization accuracy 1 mm, 50 Hz
The rostopics of our rosbag sequences are listed as follows:
- 3D LiDAR: `/rslidar_points`
- 2D LiDAR: `/scan`
- Odom: `/odom`
- GNSS Ublox F9p: `/ublox_driver/ephem`, `/ublox_driver/glo_ephem`, `/ublox_driver/range_meas`, `/ublox_driver/receiver_lla`, `/ublox_driver/receiver_pvt`
- V-I Sensor: `/camera/color/image_raw`, `/camera/imu`
- IMU: `/imu`
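For a quick sanity check of a downloaded sequence, a minimal Python sketch is given below. It assumes a ROS1 environment with the `rosbag` Python package installed; the bag filename is a placeholder, and the topic list can be adapted to the sensors of interest.

```python
# Minimal sketch (assumes ROS1 with the rosbag Python package; the bag filename is a placeholder).
import rosbag

with rosbag.Bag("street_01.bag") as bag:
    # List every topic in the bag with its message type and count.
    info = bag.get_type_and_topic_info()
    for topic, meta in info.topics.items():
        print(f"{topic}: {meta.message_count} msgs of type {meta.msg_type}")

    # Iterate over the 3D LiDAR and wheel-odometry messages only.
    for topic, msg, t in bag.read_messages(topics=["/rslidar_points", "/odom"]):
        print(f"{t.to_sec():.3f} {topic}")
```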
## 3. DATASET SEQUENCES
| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag |
|---|---|---|---|---|---|
| Anomaly | 2023-08 | 1.5 GB | 57 s | Wheel anomaly | Rosbag |
| Switch | 2023-08 | 9.5 GB | 292 s | Indoor-outdoor switch | Rosbag |
| Tree | 2023-08 | 3.7 GB | 160 s | Dense tree leaf cover | Rosbag |
| Bridge_01 | 2022-11 | 2.4 GB | 75 s | Bridge, zigzag | Rosbag |
| Bridge_02 | 2022-11 | 16.0 GB | 501 s | Bridge, long-term, straight line | Rosbag |
| Street_01 | 2022-11 | 1.7 GB | 58 s | Street, straight line | Rosbag |
| Street_02 | 2022-11 | 3.9 GB | 126 s | Bridge, sharp turn | Rosbag |
| Parking_01 | 2022-11 | 3.3 GB | 105 s | Parking lot, side moving | Rosbag |
| Parking_02 | 2022-11 | 5.4 GB | 149 s | Parking lot, rectangle loop | Rosbag |
| Building_01 | 2022-11 | 3.7 GB | 120 s | Building, far features | Rosbag |
| Building_02 | 2022-11 | 3.4 GB | 110 s | Building, far features | Rosbag |
## 4. EXPERIMENTAL RESULTS
We evaluate methods with diverse sensor settings to validate our benchmark dataset. The results show that our dataset is a valid and effective testbed for localization methods.
In some cases, our Ground-Fusion even achieves performance comparable to LiDAR SLAM!
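The reported metric is the absolute trajectory error (ATE) RMSE. As a reference, a minimal sketch of this metric follows; it assumes the estimated and ground-truth trajectories have already been time-associated and aligned (e.g., with a trajectory evaluation tool such as evo).

```python
# Minimal sketch of the ATE RMSE metric reported in Figure 2.
# Assumes gt_xyz and est_xyz are (N, 3) arrays of already time-associated, aligned positions.
import numpy as np

def ate_rmse(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> float:
    """Root mean square of the per-pose translational error, in meters."""
    err = gt_xyz - est_xyz                      # per-pose translational error vectors
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```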
<div align=center> <img src="./fig/resultf.png" width="800px"> </div>
<p align="center">Figure 2. The ATE RMSE (m) result on some sequences.</p>

<div align=center> <img src="./fig/result.png" width="800px"> </div>
<p align="center">Figure 3. The visualized trajectory.</p>

## 5. Configuration Files
We provide configuration files for several cutting-edge baseline methods, including VINS-RGBD, TartanVO, VINS-Mono, VIW-Fusion, and GVINS.