Home

Awesome

M2DGR: a Multi-modal and Multi-scenario SLAM Dataset for Ground Robots [RA-L & ICRA2022]

<div align="center">

💎 First Author: Jie Yin 殷杰   📝 [Paper]   ➡️ [Dataset Extension]   ⭐️[Presentation Video]   🔥[News]


</div> <div align=center> <img src="https://github.com/sjtuyinjie/mypics/blob/main/bigsix.jpg" width="800px"> </div> <p align="center">Figure 1. Sample Images</p>

🎯 Notice

We strongly recommend testing newly proposed SLAM algorithms on our M2DGR / M2DGR-plus / Ground-Challenge / SJTU-GVI benchmarks, because our data has the following features:

  1. Rich sensory information, including vision, LiDAR, IMU, GNSS, event cameras, thermal-infrared images, and more.
  2. Various scenarios in real-world environments, including lifts, streets, rooms, halls, and so on.
  3. Our dataset poses great challenges to existing cutting-edge SLAM algorithms, including LIO-SAM and ORB-SLAM3. If your proposed algorithm outperforms these SOTA systems on our benchmark, your paper will be much more convincing and valuable.
  4. 🔥 Many excellent open-source projects have been built on or evaluated with M2DGR/M2DGR-plus so far, for example Ground-Fusion, LVI-SAM-Easyused, SI-LIO, MM-LINS, Log-LIO, LIGO, Swarm-SLAM, VoxelMap++, GRIL-Cali, LINK3d, i-Octree, LIO-EKF, Fast-LIO ROS2, HC-LIO, LIO-RF, PIN-SLAM, LOG-LIO2, Section-LIO, I2EKF-LO, Liloc, BMBL, Light-LOAM, and so on!

Table of Contents

  1. 🚩 News & Updates
  2. Introduction
  3. License
  4. Sensor Setup
  5. ⭐️ Dataset Sequences
  6. 📝 Configuration Files
  7. Development Toolkits
  8. Star History
  9. Acknowledgement

[!TIP] Check the table of contents above for a quick overview, and check the news below for the latest updates, especially the list of projects based on M2DGR.

News & Updates

<div align=center> <img src="https://github.com/shuttworth/Record_Datasets_For_LVI-SAM/blob/main/img/gate_01_v1.gif" width="70%"> </div> <div align=center> <img src="https://github.com/shuttworth/Record_Datasets_For_LVI-SAM/blob/main/img/street_08_v1.gif" width="70%"> </div> <p align="center">LVI-SAM on M2DGR</p>

[!NOTE] If you build your open-source project based on M2DGR or test a cutting-edge SLAM system on M2DGR, please open an issue to let us know so we can add your contribution.

INTRODUCTION

ABSTRACT:

We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full sensor suite, including six fish-eye and one sky-pointing RGB cameras, an infrared camera, an event camera, a Visual-Inertial Sensor (VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade Global Navigation Satellite System (GNSS) receiver, and a GNSS-IMU navigation system with real-time kinematic (RTK) signals. All these sensors were well calibrated and synchronized, and their data were recorded simultaneously. The ground-truth trajectories were obtained by a motion-capture device, a laser 3D tracker, and an RTK receiver. The dataset comprises 36 sequences (about 1 TB) captured in diverse scenarios, including both indoor and outdoor environments. We evaluate state-of-the-art SLAM algorithms on M2DGR. Results show that existing solutions perform poorly in some scenarios. For the benefit of the research community, we make the dataset and tools public.

Keywords: Dataset, Multi-modal, Multi-scenario, Ground Robot

MAIN CONTRIBUTIONS:

VIDEO

ICRA2022 Presentation

For Chinese users, try bilibili

LICENSE

This work is licensed under the MIT License and is provided for academic purposes. If you are interested in our project for commercial purposes, please contact us at robot_yinjie@outlook.com for further communication.

If you face any problem when using this dataset, feel free to open an issue. And if you find our dataset helpful in your research, simply give this project a star. If you use M2DGR in an academic work, please cite:

```bibtex
@article{yin2021m2dgr,
  title={M2dgr: A multi-sensor and multi-scenario slam dataset for ground robots},
  author={Yin, Jie and Li, Ang and Li, Tao and Yu, Wenxian and Zou, Danping},
  journal={IEEE Robotics and Automation Letters},
  volume={7},
  number={2},
  pages={2266--2273},
  year={2021},
  publisher={IEEE}
}

@article{yin2024ground,
  title={Ground-Fusion: A Low-cost Ground SLAM System Robust to Corner Cases},
  author={Yin, Jie and Li, Ang and Xi, Wei and Yu, Wenxian and Zou, Danping},
  journal={arXiv preprint arXiv:2402.14308},
  year={2024}
}
```

SENSOR SETUP

Acquisition Platform

Physical drawings and schematics of the ground robot are given below. Units in the figures are centimeters.

<div align=center> <img src="https://github.com/sjtuyinjie/mypics/blob/main/newcar4.png" width="800px"> </div> <p align="left">Figure 2. The GAEA Ground Robot Equipped with a Full Sensor Suite. The directions of the sensors are marked in different colors: red for X, green for Y, and blue for Z.</p>

Sensor parameters

All sensors and tracking devices, together with their most important parameters, are listed below:

The rostopics of our rosbag sequences are listed as follows:

DATASET SEQUENCES

ALL THE SEQUENCES, together with their ground truth (GT), are now publicly available.

<div align=center> <img src="https://github.com/sjtuyinjie/mypics/blob/main/dynamic.gif" width="600px"> </div> <p align="left">Figure 3. A sample video with fish-eye images (both forward-looking and sky-pointing), perspective image, thermal-infrared image, event image, and LiDAR odometry</p>

An overview of M2DGR is given in the table below:

| Scenario | Street | Circle | Gate | Walk | Hall | Door | Lift | Room | Roomdark | TOTAL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Number | 10 | 2 | 3 | 1 | 5 | 2 | 4 | 3 | 6 | 36 |
| Size/GB | 590.7 | 50.6 | 65.9 | 21.5 | 117.4 | 46.0 | 112.1 | 45.3 | 171.1 | 1220.6 |
| Duration/s | 7958 | 478 | 782 | 291 | 1226 | 588 | 1224 | 275 | 866 | 13688 |
| Dist/m | 7727.72 | 618.03 | 248.40 | 263.17 | 845.15 | 200.14 | 266.27 | 144.13 | 395.66 | 10708.67 |
| Ground Truth | RTK/INS | RTK/INS | RTK/INS | RTK/INS | Leica | Leica | Leica | Mocap | Mocap | --- |

Outdoors

<div align=center> <img src="https://github.com/sjtuyinjie/mypics/blob/main/forgithub/outdoor.png" width="600px"> <p align="center">Figure 4. Outdoor Sequences: all trajectories are mapped in different colors.</p>

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| gate_01 | 2021-07-31 | 16.4 GB | 172 s | dark, around gate | Rosbag | GT |
| gate_02 | 2021-07-31 | 27.3 GB | 327 s | dark, loop back | Rosbag | GT |
| gate_03 | 2021-08-04 | 21.9 GB | 283 s | day | Rosbag | GT |

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| Circle_01 | 2021-08-03 | 23.3 GB | 234 s | circle | Rosbag | GT |
| Circle_02 | 2021-08-07 | 27.3 GB | 244 s | circle | Rosbag | GT |

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| street_01 | 2021-08-06 | 75.8 GB | 1028 s | street and buildings, night, zigzag, long-term | Rosbag | GT |
| street_02 | 2021-08-03 | 83.2 GB | 1227 s | day, long-term | Rosbag | GT |
| street_03 | 2021-08-06 | 21.3 GB | 354 s | night, back and forth, full speed | Rosbag | GT |
| street_04 | 2021-08-03 | 48.7 GB | 858 s | night, around lawn, loop back | Rosbag | GT |
| street_05 | 2021-08-04 | 27.4 GB | 469 s | night, straight line | Rosbag | GT |
| street_06 | 2021-08-04 | 35.0 GB | 494 s | night, one turn | Rosbag | GT |
| street_07 | 2021-08-06 | 77.2 GB | 929 s | dawn, zigzag, sharp turns | Rosbag | GT |
| street_08 | 2021-08-06 | 31.2 GB | 491 s | night, loop back, zigzag | Rosbag | GT |
| street_09 | 2021-08-07 | 83.2 GB | 907 s | day, zigzag | Rosbag | GT |
| street_010 | 2021-08-07 | 86.2 GB | 910 s | day, zigzag | Rosbag | GT |
| walk_01 | 2021-08-04 | 21.5 GB | 291 s | day, back and forth | Rosbag | GT |
</div>

Indoors

<div align=center> <img src="https://github.com/sjtuyinjie/mypics/blob/main/forgithub/lift.jpg" width="600px"> <p align="left">Figure 5. Lift Sequences: the robot roamed a hall on the first floor and then went to the second floor by lift. A laser scanner tracked the trajectory outside the lift.</p>

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| lift_01 | 2021-08-04 | 18.4 GB | 225 s | lift | Rosbag | GT |
| lift_02 | 2021-08-04 | 43.6 GB | 488 s | lift | Rosbag | GT |
| lift_03 | 2021-08-15 | 22.3 GB | 252 s | lift | Rosbag | GT |
| lift_04 | 2021-08-15 | 27.8 GB | 299 s | lift | Rosbag | GT |

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| hall_01 | 2021-08-01 | 29.1 GB | 351 s | random walk | Rosbag | GT |
| hall_02 | 2021-08-08 | 15.0 GB | 128 s | random walk | Rosbag | GT |
| hall_03 | 2021-08-08 | 20.5 GB | 164 s | random walk | Rosbag | GT |
| hall_04 | 2021-08-15 | 17.7 GB | 181 s | random walk | Rosbag | GT |
| hall_05 | 2021-08-15 | 35.1 GB | 402 s | circle | Rosbag | GT |

<img src="https://github.com/sjtuyinjie/mypics/blob/main/forgithub/room.png" width="600px"> <p align="center">Figure 6. Room Sequences: under a motion-capture system with twelve cameras.</p>

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| room_01 | 2021-07-30 | 14.0 GB | 72 s | room, bright | Rosbag | GT |
| room_02 | 2021-07-30 | 15.2 GB | 75 s | room, bright | Rosbag | GT |
| room_03 | 2021-07-30 | 26.1 GB | 128 s | room, bright | Rosbag | GT |
| room_dark_01 | 2021-07-30 | 20.2 GB | 111 s | room, dark | Rosbag | GT |
| room_dark_02 | 2021-07-30 | 30.3 GB | 165 s | room, dark | Rosbag | GT |
| room_dark_03 | 2021-07-30 | 22.7 GB | 116 s | room, dark | Rosbag | GT |
| room_dark_04 | 2021-08-15 | 29.3 GB | 143 s | room, dark | Rosbag | GT |
| room_dark_05 | 2021-08-15 | 33.0 GB | 159 s | room, dark | Rosbag | GT |
| room_dark_06 | 2021-08-15 | 35.6 GB | 172 s | room, dark | Rosbag | GT |
</div>

Alternating indoors and outdoors

<div align=center> <img src="https://github.com/sjtuyinjie/mypics/blob/main/forgithub/door.jpg" width="600px"> <p align="center">Figure 7. Door Sequences: a laser scanner tracked the robot through a door, from indoors to outdoors.</p>

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
| --- | --- | --- | --- | --- | --- | --- |
| door_01 | 2021-08-04 | 35.5 GB | 461 s | outdoor to indoor to outdoor, long-term | Rosbag | GT |
| door_02 | 2021-08-04 | 10.5 GB | 127 s | outdoor to indoor, short-term | Rosbag | GT |
</div>

CONFIGURATION FILES

For convenience of evaluation, we provide configuration files for several well-known SLAM systems:

A-LOAM, LeGO-LOAM, LINS, LIO-SAM, VINS-Mono, ORB-Pinhole, ORB-Fisheye, ORB-Thermal, and CubemapSLAM.

Furthermore, a number of cutting-edge SLAM systems have been tested on M2DGR by the community. Here are the configuration files for ORB-SLAM2, ORB-SLAM3, VINS-Mono, DM-VIO, A-LOAM, LeGO-LOAM, LIO-SAM, LVI-SAM, LINS, FastLIO2, Fast-LIVO, Faster-LIO, and hdl_graph_slam. Welcome to test! If you have more configuration files, please contact me and I will post them here.

DEVELOPMENT TOOLKITS

Extracting Images

```shell
roscd image_view
rosmake image_view
sudo apt-get install mjpegtools
```

Open a terminal and run `roscore`. Then, in another terminal, run:

```shell
rosrun image_transport republish compressed in:=/camera/color/image_raw raw out:=/camera/color/image_raw respawn="true"
```
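Alternatively, compressed color frames can be dumped straight to disk with the `rosbag` Python API, since each `sensor_msgs/CompressedImage` message already carries the encoded JPEG bytes. The sketch below is not part of the official toolkit; the bag name, topic, and output directory are placeholders, and a ROS1 Python environment is assumed:

```python
def frame_filename(secs, nsecs):
    """Name each frame by its ROS timestamp (sec.nsec), TUM-style."""
    return "%d.%09d.jpg" % (secs, nsecs)

def export_compressed_images(bag_path, topic, out_dir):
    """Write every CompressedImage on `topic` to a JPEG file in `out_dir`."""
    # rosbag is only importable inside a ROS1 environment, so import lazily.
    import os
    import rosbag
    os.makedirs(out_dir, exist_ok=True)
    with rosbag.Bag(bag_path) as bag:
        for _, msg, _ in bag.read_messages(topics=[topic]):
            # CompressedImage.data already holds the encoded image bytes.
            name = frame_filename(msg.header.stamp.secs, msg.header.stamp.nsecs)
            with open(os.path.join(out_dir, name), "wb") as f:
                f.write(bytes(msg.data))

# Example (placeholder bag name):
# export_compressed_images("gate_01.bag",
#                          "/camera/color/image_raw/compressed", "frames")
```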

Evaluation

We use the open-source tool evo for evaluation. To install evo, run:

```shell
pip install evo --upgrade --no-binary evo
```

To evaluate monocular visual SLAM, run:

```shell
evo_ape tum street_07.txt your_result.txt -vaps
```

To evaluate LiDAR SLAM, run:

```shell
evo_ape tum street_07.txt your_result.txt -vap
```

To test GNSS-based methods, run:

```shell
evo_ape tum street_07.txt your_result.txt -vp
```
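Conceptually, `evo_ape` associates estimated poses with ground-truth poses by timestamp and computes the absolute trajectory error; the `-a`/`-s` flags additionally align the trajectories before comparison. The following is a minimal, hedged Python sketch of the unaligned translational ATE RMSE on TUM-format files (function names are ours, not evo's, and evo's real implementation also handles alignment and rotation error):

```python
import math

def load_tum(path):
    """Parse a TUM-format file ('timestamp tx ty tz qx qy qz qw' per line),
    keeping only the translation of each pose."""
    poses = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue
            vals = line.split()
            poses[float(vals[0])] = tuple(float(v) for v in vals[1:4])
    return poses

def ate_rmse(gt, est, max_dt=0.01):
    """Associate each estimated pose with the nearest ground-truth timestamp
    and return the translational RMSE over all matched pairs."""
    sq_errs = []
    for t, p in est.items():
        t_gt = min(gt, key=lambda s: abs(s - t))
        if abs(t_gt - t) > max_dt:
            continue  # no ground-truth pose close enough in time
        g = gt[t_gt]
        sq_errs.append(sum((a - b) ** 2 for a, b in zip(p, g)))
    if not sq_errs:
        raise ValueError("no timestamp associations within max_dt")
    return math.sqrt(sum(sq_errs) / len(sq_errs))

# Usage (placeholder file names):
# print(ate_rmse(load_tum("street_07.txt"), load_tum("your_result.txt")))
```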

Calibration

For camera intrinsics: visit OCamCalib for the omnidirectional model; visit VINS-Fusion for the pinhole and MEI models; use OpenCV for the Kannala-Brandt model.

For IMU intrinsics, visit imu_utils.

For extrinsics between cameras and IMU, visit Kalibr. For extrinsics between LiDAR and IMU, visit Lidar_IMU_Calib. For extrinsics between cameras and LiDAR, visit Autoware.

Getting RINEX files

For GNSS-based methods like RTKLIB, data in the RINEX format is usually needed. To make use of the GNSS raw measurements, we use the Link toolkit.

ROS drivers for UVC cameras

We wrote a ROS driver for UVC cameras to record our thermal-infrared images: UVC ROS driver.

Star History

Star History Chart

ACKNOWLEDGEMENT

This work is supported by NSFC (62073214). The authors from SJTU hereby express their appreciation.