Quatro

Official page of "A Single Correspondence Is Enough: Robust Global Registration to Avoid Degeneracy in Urban Environments", which was accepted at ICRA 2022. Note that this repository is a re-implementation, so it is not exactly the same as the original version.

[Video] [Preprint Paper]

Demo

NEWS (May 21, 2024)

Quatro is now fully supported by the TEASER++ library. If you want a ROS-free version, please visit that repository :)

NEWS (Jan. 27 2023)

Characteristics

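As the snippet below shows, Quatro follows a simple source/target interface: after declaring a Quatro instance, set the matched source and target points and estimate the relative transformation with a single call.
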
// After the declaration of Quatro,
// `srcMatched` and `tgtMatched` are the matched feature points (i.e., the
// putative correspondences) of the source and target clouds, respectively.
quatro.setInputSource(srcMatched);
quatro.setInputTarget(tgtMatched);
// Estimate the transformation that aligns the source to the target.
Eigen::Matrix4d output;
quatro.computeTransformation(output);
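
If the clouds are handled as PCL point clouds, the estimated transformation can be applied with PCL's transform utility, as in the minimal sketch below (`srcCloud` is a hypothetical source cloud, not a variable from this repository):

#include <pcl/point_types.h>
#include <pcl/common/transforms.h>

// Warp the (hypothetical) source cloud into the target frame using the
// transformation estimated by Quatro above.
pcl::PointCloud<pcl::PointXYZ>::Ptr aligned(new pcl::PointCloud<pcl::PointXYZ>);
pcl::transformPointCloud(*srcCloud, *aligned, output);
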
KITTI dataset / NAVER LABS Loc. dataset

Contributors

ToDo


Contents

  1. Test Env.
  2. How to Build
  3. How to Run Quatro
  4. Citation

Test Env.

The code has been tested successfully on:

How to Build

ROS Setting

  1. Install the following dependencies:
sudo apt install cmake libeigen3-dev libboost-all-dev
  2. Install ROS on your machine.
  3. Then, build the Quatro package and enjoy! :) We use [catkin tools](https://catkin-tools.readthedocs.io/en/latest/):
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone git@github.com:url-kaist/Quatro.git
cd ~/catkin_ws
catkin build quatro 

Note that Quatro requires the pmc library, which is automatically installed via 3rdparty/find_dependencies.cmake.
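
After the build finishes, source the workspace so that ROS can find the quatro package (the standard catkin workflow):

source ~/catkin_ws/devel/setup.bash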

How to Run Quatro

Prerequisites

In this study, the fast point feature histogram (FPFH) is utilized, since it is a widely used conventional descriptor for registration. However, computing the original FPFH on a 3D point cloud captured by a 64-channel LiDAR sensor takes tens of seconds, which is too slow. In summary, feature extraction & matching is still the bottleneck of global registration :worried: (in fact, the matching accuracy is not that critical because Quatro is extremely robust against outliers!).

For this reason, we employ voxel-sampled FPFH, i.e., FPFH computed on a voxel-sampled cloud, followed by a correspondence test. In addition, we employ Patchwork, a state-of-the-art ground segmentation method, together with the image-projection-based rejection of small subclusters proposed in LeGO-LOAM. Note that these modules are not presented in our paper! A sketch of the voxel-sampled FPFH step is given below.
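
For reference, here is a minimal, illustrative sketch of the voxel-sampled FPFH step using PCL. This is not the code of this repository: the point type, voxel size, and search radii are assumptions for illustration, and Patchwork and the projection-based rejection are omitted.

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/search/kdtree.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/fpfh_omp.h>

// Assume `cloud` holds a raw LiDAR scan (hypothetical input).
pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
pcl::PointCloud<pcl::PointXYZ>::Ptr voxelized(new pcl::PointCloud<pcl::PointXYZ>);

// 1. Voxel sampling to reduce the number of points.
pcl::VoxelGrid<pcl::PointXYZ> voxel;
voxel.setInputCloud(cloud);
voxel.setLeafSize(0.3f, 0.3f, 0.3f);      // hypothetical leaf size [m]
voxel.filter(*voxelized);

// 2. Surface normals, which FPFH requires.
pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
pcl::NormalEstimationOMP<pcl::PointXYZ, pcl::Normal> normal_estimator;
normal_estimator.setInputCloud(voxelized);
normal_estimator.setSearchMethod(tree);
normal_estimator.setRadiusSearch(0.5);    // hypothetical radius [m]
normal_estimator.compute(*normals);

// 3. FPFH descriptors on the voxel-sampled cloud, i.e., the "voxel-sampled FPFH".
pcl::PointCloud<pcl::FPFHSignature33>::Ptr descriptors(new pcl::PointCloud<pcl::FPFHSignature33>);
pcl::FPFHEstimationOMP<pcl::PointXYZ, pcl::Normal, pcl::FPFHSignature33> fpfh_estimator;
fpfh_estimator.setInputCloud(voxelized);
fpfh_estimator.setInputNormals(normals);
fpfh_estimator.setSearchMethod(tree);
fpfh_estimator.setRadiusSearch(0.75);     // hypothetical radius [m]
fpfh_estimator.compute(*descriptors);

The descriptors of the source and target clouds are then matched (e.g., by nearest-neighbor search in descriptor space) to obtain the putative correspondences that are fed to Quatro.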

Finally, we can reduce the computational time of feature extraction & matching, i.e., the front-end of global registration, from tens of seconds to around 0.2 sec. The overall pipeline is as follows:

Note

To fine-tune the parameters for your own situation, please refer to the config folder. In particular, for fine-tuning of Patchwork, please refer to this Wiki.

TL;DR

  1. Download the toy pcd files.

The point clouds are from the KITTI dataset, so they were captured by a Velodyne HDL-64E.

The toy pcd files are downloaded automatically. If there is a problem, run the commands below:

roscd quatro
cd materials
wget https://urserver.kaist.ac.kr/publicdata/quatro/000540.bin
wget https://urserver.kaist.ac.kr/publicdata/quatro/001319.bin
  2. Launch the roslaunch file as follows:
OMP_NUM_THREADS=4 roslaunch quatro quatro.launch

(Unfortunately, the first run shows rather slow and imprecise performance. This may be due to multi-threading issues.)

Visualized inner pipelines: source (red), target (green), and the estimated output (blue)

Citation

If our research has been helpful, please cite the papers below:

@article{lim2024quatro++,
  title={Quatro++: Robust global registration exploiting ground segmentation for loop closing in LiDAR SLAM},
  author={Lim, Hyungtae and Kim, Beomsoo and Kim, Daebeom and Mason Lee, Eungchang and Myung, Hyun},
  journal={The International Journal of Robotics Research},
  volume={43},
  number={5},
  pages={685--715},
  year={2024}
}
@inproceedings{lim2022quatro,
  title={A Single Correspondence Is Enough: Robust Global Registration to Avoid Degeneracy in Urban Environments},
  author={Lim, Hyungtae and Yeon, Suyong and Ryu, Suyong and Lee, Yonghan and Kim, Youngji and Yun, Jaeseong and Jung, Euigon and Lee, Donghwan and Myung, Hyun},
  booktitle={Proc. IEEE Int. Conf. Robot. Autom.},
  pages={8010--8017},
  year={2022}
}
@article{lim2021patchwork,
  title={Patchwork: Concentric Zone-based Region-wise Ground Segmentation with Ground Likelihood Estimation Using a 3D LiDAR Sensor},
  author={Lim, Hyungtae and Oh, Minho and Myung, Hyun},
  journal={IEEE Robot. Autom. Lett.},
  volume={6},
  number={4},
  pages={6458--6465},
  year={2021}
}

Acknowledgment

This work was supported by the Industry Core Technology Development Project, 20005062, Development of Artificial Intelligence Robot Autonomous Navigation Technology for Agile Movement in Crowded Space, funded by the Ministry of Trade, Industry & Energy (MOTIE, Republic of Korea), and by the research project “Development of A.I. based recognition, judgment and control solution for autonomous vehicle corresponding to atypical driving environment,” funded by the Ministry of Science and ICT (Republic of Korea) under Contract No. 2019-0-00399. The student was supported by the BK21 FOUR program of the Ministry of Education (Republic of Korea).

License

<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License</a>.

Copyright