<!-- PROJECT LOGO -->
<br />
<p align="center">
  <a href="https://github.com/TurtleZhong/LVIO-SAM">
    <img src="images/lvio-sam-kitti.gif" alt="Logo" width="80%">
  </a>
  <a href="https://github.com/TurtleZhong/LVIO-SAM">
    <img src="images/lvio-sam-cmu.gif" alt="Logo" width="80%">
  </a>

  <h3 align="center">LVIO-SAM</h3>

  <p align="center">
    A multi-sensor fusion odometry, LVIO-SAM, which fuses LiDAR, stereo camera and inertial measurement unit (IMU) via smoothing and mapping.
    <br />
    <a href="https://www.youtube.com/playlist?list=PLBBuFHQF08z4xjS1IwlQQE0rfKYv4aBD1">Demo Youtube</a>
    ·
    <a href="https://www.bilibili.com/video/BV1Hq4y1S7aL?share_source=copy_web">Demo Bilibili</a>
    ·
    <a href="https://github.com/TurtleZhong/LVIO-SAM/issues">Report Bug</a>
    ·
    <a href="https://github.com/TurtleZhong/LVIO-SAM">Request Feature</a>
  </p>
</p>

<!-- TABLE OF CONTENTS -->
<details open="open">
  <summary>Table of Contents</summary>
  <ol>
    <li><a href="#about-the-project">About The Project</a></li>
    <li><a href="#simulations-environment">Simulations environment</a></li>
    <li><a href="#how-to-run-in-docker">How to run in Docker</a></li>
    <li><a href="#roadmap">Roadmap</a></li>
    <li><a href="#contributing">Contributing</a></li>
    <li><a href="#license">License</a></li>
    <li><a href="#contact">Contact</a></li>
    <li><a href="#acknowledgements">Acknowledgements</a></li>
  </ol>
</details>

<!-- ABOUT THE PROJECT -->
## About The Project
This project provides a multi-sensor fusion odometry, LVIO-SAM, which fuses LiDAR, stereo camera and inertial measurement unit (IMU) via smoothing and mapping.
**!!!Important Notes!!!** The code is still being integrated; we will release it in the future.
## Simulations environment
We modified the Gazebo world proposed here and added our own sensors to test the proposed method. We use the Husky as the base robot and modified its URDF. The robot is equipped with a Velodyne VLP-16 LiDAR, a stereo camera (640x480), and an IMU (50 Hz).
Download the CMU campus model to `sim_env/husky_gazebo/mesh/`:

```bash
cd YOUR_WORK_PATH/LVIO_SAM/sim_env/husky_gazebo/mesh/
unzip autonomus_exploration_environments.zip
```
Copy the campus model to `~/.gazebo/models/`:

```bash
cd autonomus_exploration_environments/
cp -r campus ~/.gazebo/models/
```
You can launch Gazebo and find the campus model to check that it is OK.
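Alternatively, a quick sanity check from the shell (assuming the default Gazebo model path):

```bash
# The campus model should now be visible to Gazebo
ls ~/.gazebo/models/campus
```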
Then clone and build the project:

```bash
git clone https://github.com/TurtleZhong/LVIO-SAM.git
cd YOUR_PATH/LVIO-SAM
catkin build -DCMAKE_BUILD_TYPE=Release
source devel/setup.bash
```
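Note that `catkin build` comes from the `catkin_tools` package rather than the stock ROS install; assuming Ubuntu 18.04 / ROS Melodic, it can be installed with:

```bash
sudo apt-get install python-catkin-tools
```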
Launch the simulation world:

```bash
roslaunch husky_gazebo husky_campus.launch
```
It will take a few minutes to load the world. Please start a new terminal and launch the Husky and its sensor model:
```bash
roslaunch husky_gazebo spawn_husky.launch
```
If everything is OK, you will get this:
<p align="center"> <a href=""> <img src="images/cmu_campus_gazebo_ros.png" alt="[Logo]" width="100%"> </a> </p>if you want control the robot, you can use the keyboard i,j,k,l etc.
```bash
rosrun teleop_twist_keyboard teleop_twist_keyboard.py
```
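Once the robot is spawned, you can verify that the sensor suite described above is publishing by checking topic rates. The topic names below are assumptions; list the actual ones with `rostopic list`:

```bash
# Topic names are assumptions -- expect ~10 Hz from the VLP-16 and ~50 Hz from the IMU
rostopic hz /points_raw
rostopic hz /imu_raw
```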
## How to run in Docker
Since our code is still being integrated, we will release it in the future. In the meantime we provide a Docker environment for users, so Docker should be correctly installed.
### Step 1. Prepare Datasets
- KITTI datasets
```bash
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0027/2011_09_30_drive_0027_sync.zip
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_drive_0027/2011_09_30_drive_0027_extract.zip
wget https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_30_calib.zip
unzip 2011_09_30_drive_0027_sync.zip
unzip 2011_09_30_drive_0027_extract.zip
unzip 2011_09_30_calib.zip
python kitti2bag.py -t 2011_09_30 -r 0027 raw_synced .
```
That's it. You have a bag that contains your data.
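The `kitti2bag` converter used above is a third-party tool; assuming the PyPI distribution, it can be installed with:

```bash
pip install kitti2bag
```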
```
╰─$ rosbag info kitti_2011_09_30_drive_0027_synced.bag
path:        kitti_2011_09_30_drive_0027_synced.bag
version:     2.0
duration:    1:55s (115s)
start:       Sep 30 2011 12:40:25.07 (1317357625.07)
end:         Sep 30 2011 12:42:20.41 (1317357740.41)
size:        6.0 GB
messages:    35278
compression: none [4435/4435 chunks]
types:       geometry_msgs/TwistStamped [98d34b0043a2093cf9d9345ab6eef12e]
             sensor_msgs/CameraInfo     [c9a58c1b0b154e0e6da7578cb991d214]
             sensor_msgs/Image          [060021388200f6f0f447d0fcd9c64743]
             sensor_msgs/Imu            [6a62c6daae103f4ff57a132d6f95cec2]
             sensor_msgs/NavSatFix      [2d3a8cd499b9b4a0249fb98fd05cfa48]
             sensor_msgs/PointCloud2    [1158d486dd51d683ce2f1be655c3c181]
topics:      /gps/fix                                 1106 msgs : sensor_msgs/NavSatFix
             /gps/vel                                 1106 msgs : geometry_msgs/TwistStamped
             /imu_correct                            11556 msgs : sensor_msgs/Imu
             /imu_raw                                11556 msgs : sensor_msgs/Imu
             /kitti/camera_color_left/camera_info     1106 msgs : sensor_msgs/CameraInfo
             /kitti/camera_color_left/image_raw       1106 msgs : sensor_msgs/Image
             /kitti/camera_color_right/camera_info    1106 msgs : sensor_msgs/CameraInfo
             /kitti/camera_color_right/image_raw      1106 msgs : sensor_msgs/Image
             /kitti/camera_gray_left/camera_info      1106 msgs : sensor_msgs/CameraInfo
             /kitti/camera_gray_left/image_raw        1106 msgs : sensor_msgs/Image
             /kitti/camera_gray_right/camera_info     1106 msgs : sensor_msgs/CameraInfo
             /kitti/camera_gray_right/image_raw       1106 msgs : sensor_msgs/Image
             /points_raw                              1106 msgs : sensor_msgs/PointCloud2
```
Other source files can be found on the KITTI raw data page.
- sim_env datasets
You can record datasets from our simulation environment or download the sample dataset from the BaiduYun link; the extraction code is f8to.
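To record your own dataset while driving the robot with the teleop node, a minimal sketch (the topic names are assumptions; confirm them with `rostopic list` first):

```bash
# Record LiDAR, IMU and stereo streams into one bag (topic names are assumptions)
rosbag record -O my_sim_dataset.bag /points_raw /imu_raw \
  /stereo/left/image_raw /stereo/right/image_raw
```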
### Step 2. Get Docker images and create your own datasets
```bash
docker pull xinliangzhong/ubuntu-18.04-novnc-lvio-sam:v1
```

Use `docker images` to check that the image is OK, then start the container:

```bash
docker run -it --rm -p 8080:80 xinliangzhong/ubuntu-18.04-novnc-lvio-sam:v1
```
Then open the Chrome browser and go to http://127.0.0.1:8080/.
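If the page does not load, you can confirm the container is running and the 8080:80 port mapping is in place:

```bash
docker ps
```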
Open 3 terminals. In the first one, run:
```bash
cd /root
source .bashrc
cd work/ws_lvio/
source devel/setup.bash
roslaunch husky_gazebo husky_campus.launch
```
It will take a few minutes to load the world. Please start a new terminal and launch the Husky and its sensor model:
```bash
roslaunch husky_gazebo spawn_husky.launch
```

In the third terminal, launch the visualization:

```bash
roslaunch husky_viz view_robot.launch
```
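Optionally, you can confirm that the robot and sensor frames are being published by dumping the TF tree (assuming the standard `tf` tools are in the image):

```bash
# Listens for ~5 seconds and writes frames.pdf to the current directory
rosrun tf view_frames
```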
If everything is OK, you will get this in your Chrome browser:
<p align="center"> <a href=""> <img src="images/http_docker_view.png" alt="[Logo]" width="90%"> </a> </p>Run LVIO-SAM in docker
Follow the above steps to get the Docker image, and open it in the browser:
<p align="center"> <a href=""> <img src="images/http_docker_lvio-sam.png" alt="[Logo]" width="90%"> </a> </p>cd /root
source .bashrc
cd work/ws_lvio/
source devel/setup.bash
roslaunch lvio_sam run_kitti_debug_test_vo_between_factor.launch #for kitti dataset.
roslaunch lvio_sam run_kitti_debug_test_vo_between_factor.launch #for sim dataset.
We prepared 2 sample bags in the Docker image; you can use them directly:
```bash
rosbag play kitti_2011_09_30_drive_0027_synced.bag --pause --clock # for kitti dataset.
rosbag play 2021-08-04-09-49-56.bag --pause --clock # for sim dataset.
```
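Both bags start paused: press the space bar in the rosbag terminal to begin playback. Since `--clock` publishes simulated time, the ROS nodes should run with `use_sim_time` enabled; the launch files may already set this, but if not:

```bash
rosparam set use_sim_time true
```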
If everything is OK, you will get this in your Chrome browser:
<p align="center"> <a href=""> <img src="images/http_docker_lvio-sam-kitti.png" alt="[Logo]" width="90%"> </a> </p> <!-- ROADMAP -->Roadmap
<!-- CONTRIBUTING -->
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## License
Distributed under the MIT License.
<!-- CONTACT -->
## Contact
Xinliang Zhong - @zxl - xinliangzhong@foxmail.com
Project Link: https://github.com/TurtleZhong/LVIO-SAM
## Citation
```bibtex
@inproceedings{zhong2021lvio,
  title={LVIO-SAM: A Multi-sensor Fusion Odometry via Smoothing and Mapping},
  author={Zhong, Xinliang and Li, Yuehua and Zhu, Shiqiang and Chen, Wenxuan and Li, Xiaoqian and Gu, Jason},
  booktitle={2021 IEEE International Conference on Robotics and Biomimetics (ROBIO)},
  pages={440--445},
  year={2021},
  organization={IEEE}
}
```
<!-- ACKNOWLEDGEMENTS -->