CVTNet
The code for our paper accepted by IEEE Transactions on Industrial Informatics:
CVTNet: A Cross-View Transformer Network for LiDAR-Based Place Recognition in Autonomous Driving Environments.
[IEEE Xplore TII 2023] [arXiv] [Supplementary Materials]
Junyi Ma, Guangming Xiong, Jingyi Xu, Xieyuanli Chen*
<img src="https://github.com/BIT-MJY/CVTNet/blob/main/motivation.png" width="70%"/>CVTNet fuses the range image views (RIVs) and bird's eye views (BEVs) generated from LiDAR data to recognize previously visited places. RIVs and BEVs have the same shift for each yaw-angle rotation, which can be used to extract aligned features.
<img src="https://github.com/BIT-MJY/CVTNet/blob/main/corresponding_rotation.gif" width="70%"/>Table of Contents
Publication
If you use the code in your work, please cite our paper:
@ARTICLE{10273716,
author={Ma, Junyi and Xiong, Guangming and Xu, Jingyi and Chen, Xieyuanli},
journal={IEEE Transactions on Industrial Informatics},
title={CVTNet: A Cross-View Transformer Network for LiDAR-Based Place Recognition in Autonomous Driving Environments},
year={2023},
doi={10.1109/TII.2023.3313635}}
Dependencies
Please refer to our SeqOT repo.
How to Use
[2024-07] We thank Xiongwei Zhao for helping release the code for using CVTNet on the KITTI dataset!
[2023-03] We provide a training and test tutorial for the NCLT sequences in this repository. Before running anything, please modify the config file according to your setup.
Data Preparation
1. Data preparation for the NCLT dataset:
- laser scans from NCLT dataset: [2012-01-08] [2012-02-05]
- pretrained model
- training indexes
- ground truth
You need to generate RIVs and BEVs from raw LiDAR data by
cd tools
python gen_ri_bev.py
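For reference, the projections computed by tools/gen_ri_bev.py follow the usual spherical and polar top-down discretizations. The sketch below only illustrates the idea; all resolutions, fields of view, and ranges are assumptions rather than the script's actual parameters.

```python
# Hedged sketch of RIV (spherical) and BEV (polar top-down) projection of one
# scan; resolutions, fields of view, and max range here are illustrative only.
import numpy as np

def project_scan(points, H=32, W=900, fov_up=30.67, fov_down=-10.67, max_range=80.0):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points[:, :3], axis=1)
    yaw = np.arctan2(y, x)
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * W).astype(np.int32) % W  # azimuth column

    # Range image view: rows indexed by elevation angle, pixel value is depth.
    fov_up_r, fov_down_r = np.deg2rad(fov_up), np.deg2rad(fov_down)
    v_ri = np.clip((fov_up_r - pitch) / (fov_up_r - fov_down_r) * H, 0, H - 1).astype(np.int32)
    riv = np.zeros((H, W), dtype=np.float32)
    riv[v_ri, u] = depth

    # Bird's eye view (polar): rows indexed by radial distance, pixel value is height.
    v_bev = np.clip(depth / max_range * H, 0, H - 1).astype(np.int32)
    bev = np.zeros((H, W), dtype=np.float32)
    bev[v_bev, u] = z
    return riv, bev
```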
2. Data preparation for the KITTI dataset (provided by Xiongwei):
2.1 You need to generate RIVs and BEVs from raw LiDAR data for the training and test sets by
cd tools
python gen_ri_bev.py
2.2 You need to generate the training indexes for KITTI from the raw LiDAR data of the training sets by
cd tools
python gen_training_index_kitti.py
2.3 You need to generate the ground truth for KITTI by
cd tools
python gen_ground_truth_kitti.py
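As a rough reference for what these two scripts produce, training indexes and loop-closure ground truth are typically derived from pose distances between scans. The sketch below illustrates only the idea; the thresholds, frame gap, and data layout are assumptions, not the scripts' actual settings.

```python
# Hedged sketch: treat two scans as a loop pair when their poses are closer
# than a distance threshold and far enough apart in time. The 3 m / 100-frame
# values are illustrative; see tools/gen_ground_truth_kitti.py and
# tools/gen_training_index_kitti.py for the actual logic.
import numpy as np

def pose_distance_pairs(poses_xy, dist_thresh=3.0, min_frame_gap=100):
    """poses_xy: (N, 2) array of x/y positions of one KITTI sequence."""
    dists = np.linalg.norm(poses_xy[:, None, :] - poses_xy[None, :, :], axis=-1)
    pairs = []
    for i in range(len(poses_xy)):
        for j in range(i + min_frame_gap, len(poses_xy)):
            if dists[i, j] < dist_thresh:
                pairs.append((i, j))
    return np.array(pairs)
```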
Training
You can start the training process with
cd train
python ./train_cvtnet.py
Note that we train our model using only the oldest sequence of the NCLT dataset (2012-01-08) to show that it works well over long time spans even when trained on limited data.
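Before launching the training script, the config file mentioned above should point at the data prepared in the previous steps. A minimal sketch of loading and inspecting it is shown below; the file path and key names are assumptions, not the repository's actual schema.

```python
# Hedged sketch: inspect the YAML config read by train/train_cvtnet.py.
# The path and keys below are assumptions; check the repo's config file.
import yaml

with open("../config/config.yml", "r") as f:  # assumed config location
    cfg = yaml.safe_load(f)

for key, value in cfg.items():  # e.g. RIV/BEV folders, training indexes, ground truth
    print(key, "->", value)
```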
Test
You can test the place recognition (PR) performance of CVTNet by
cd test
python ./test_cvtnet_prepare.py
python ./cal_topn_recall.py
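For reference, cal_topn_recall.py evaluates retrieval with a top-N recall metric. The sketch below shows the metric on assumed inputs (descriptor arrays and a ground-truth mapping); the names and data layout are illustrative, not the script's.

```python
# Hedged sketch of top-N recall: a query counts as recalled if at least one of
# its N nearest database descriptors is a true positive in the ground truth.
# Array and variable names are illustrative, not the repo's.
import numpy as np

def top_n_recall(query_desc, db_desc, gt, n=1):
    """query_desc: (Q, D), db_desc: (M, D), gt: dict query index -> set of db indexes."""
    hits = 0
    for q in gt:
        dists = np.linalg.norm(db_desc - query_desc[q], axis=1)
        top_n = np.argsort(dists)[:n]
        hits += bool(set(top_n.tolist()) & gt[q])
    return hits / max(len(gt), 1)
```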
You can also test the yaw-rotation invariance of CVTNet by
cd test
python ./test_yaw_rotation_invariance.py
<img src="https://github.com/BIT-MJY/CVTNet/blob/main/yaw_rotation_invariance.gif" width="70%"/>
It can be seen that the global descriptors generated by CVTNet are not affected by yaw-angle rotation.
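The check can be sketched as follows: shift the input views by some columns (equivalent to a yaw rotation of the scan) and compare the resulting descriptors. The input layout and model call below are assumptions; see test/test_yaw_rotation_invariance.py for the actual procedure.

```python
# Hedged sketch of a yaw-invariance check: a column shift of the stacked
# RIV/BEV input corresponds to a yaw rotation, and the global descriptor
# should barely change. Input layout and model signature are assumptions.
import torch

def descriptor_shift_error(model, views, shift=100):
    """views: (1, C, H, W) tensor of stacked multi-layer RIVs and BEVs."""
    rotated = torch.roll(views, shifts=shift, dims=-1)  # yaw rotation as column shift
    with torch.no_grad():
        d0 = model(views)
        d1 = model(rotated)
    return torch.norm(d0 - d1).item()  # close to zero for an invariant descriptor
```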
C++ Implementation
We provide a toy example showing a C++ implementation of CVTNet with LibTorch. First, you need to generate the model file by
cd CVTNet_libtorch
python ./gen_libtorch_model.py
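The export follows the usual TorchScript tracing workflow so the C++ side can load the model with torch::jit::load. The sketch below uses a stand-in network and an assumed input shape; see gen_libtorch_model.py for how the actual CVTNet weights are exported.

```python
# Hedged sketch of exporting a traced TorchScript model for LibTorch. The
# stand-in network and input shape are placeholders for the real CVTNet model.
import torch

model = torch.nn.Sequential(            # replace with the trained CVTNet module
    torch.nn.Conv2d(10, 8, 3, padding=1),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
)
model.eval()

example = torch.rand(1, 10, 32, 900)    # illustrative multi-layer RIV/BEV input
traced = torch.jit.trace(model, example)  # records the forward pass
traced.save("cvtnet.pt")                # C++ loads this file via torch::jit::load
```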
- Before building, make sure PCL is installed in your environment.
- Here we use LibTorch for CUDA 11.3 (Pre-cxx11 ABI). Please modify Torch_DIR in CMakeLists.txt to point to your LibTorch path.
- For more details of LibTorch installation, please check this website.
Then you can generate a descriptor of the provided 1.pcd by
cd ws
mkdir build
cd build
cmake ..
make -j6
./fast_cvtnet
[2024-10] Test CVTNet on M2DGR
We thank Xianyun Jiao and Jingyi Xu for their efforts in testing CVTNet on the M2DGR dataset, which was collected on the SJTU campus by Yin et al. Please contact them if you encounter any problems.
The related code is available at this link.
<img src="https://github.com/sjtuyinjie/mypics/blob/main/forgithub/outdoor.png" width="70%"/>Sequences of M2DGR dataset
TODO
- Release the preprocessing code and pretrained model
- Release sequence-enhanced CVTNet (SeqCVT)
Related Work
Thanks for your interest in our previous OT series for LiDAR-based place recognition.
- OverlapNet: Loop Closing for 3D LiDAR-based SLAM
@inproceedings{chen2020rss,
author = {X. Chen and T. L\"abe and A. Milioto and T. R\"ohling and O. Vysotska and A. Haag and J. Behley and C. Stachniss},
title = {{OverlapNet: Loop Closing for LiDAR-based SLAM}},
booktitle = {Proceedings of Robotics: Science and Systems (RSS)},
year = {2020}
}
- OverlapTransformer: An Efficient and Yaw-Angle-Invariant Transformer Network for LiDAR-Based Place Recognition
@ARTICLE{ma2022ral,
author={Ma, Junyi and Zhang, Jun and Xu, Jintao and Ai, Rui and Gu, Weihao and Chen, Xieyuanli},
journal={IEEE Robotics and Automation Letters},
title={OverlapTransformer: An Efficient and Yaw-Angle-Invariant Transformer Network for LiDAR-Based Place Recognition},
year={2022},
volume={7},
number={3},
pages={6958-6965},
doi={10.1109/LRA.2022.3178797}}
- SeqOT: A Spatial-Temporal Transformer Network for Place Recognition Using Sequential LiDAR Data
@ARTICLE{ma2022tie,
author={Ma, Junyi and Chen, Xieyuanli and Xu, Jingyi and Xiong, Guangming},
journal={IEEE Transactions on Industrial Electronics},
title={SeqOT: A Spatial-Temporal Transformer Network for Place Recognition Using Sequential LiDAR Data},
year={2022},
doi={10.1109/TIE.2022.3229385}}
License
Copyright 2023, Junyi Ma, Guangming Xiong, Jingyi Xu, Xieyuanli Chen, Beijing Institute of Technology.
This project is free software made available under the MIT License. For more details see the LICENSE file.