Neural SLAM Evaluation Benchmark

This repo contains the evaluation code for Co-SLAM: Joint Coordinate and Sparse Parametric Encodings for Neural Real-Time SLAM, a neural SLAM method that performs real-time camera tracking and dense reconstruction based on a joint encoding.

Project Page | Paper

Co-SLAM: Joint Coordinate and Sparse Parametric Encodings for Neural Real-Time SLAM <br /> Hengyi Wang*, Jingwen Wang*, Lourdes Agapito <br /> CVPR 2023

<p align="center"> <a href=""> <img src="./media/coslam_teaser.gif" alt="Logo" width="80%"> </a> </p>

In this repo we also provide a comprehensive comparison of existing open-source RGB-D neural SLAM methods under the same evaluation protocol. We hope this will benefit research in the area of neural SLAM.

<!-- TABLE OF CONTENTS --> <details open="open" style='padding: 10px; border-radius:5px 30px 30px 5px; border-style: solid; border-width: 1px;'> <summary>Table of Contents</summary> <ol> <li> <a href="#installation">Installation</a> </li> <li> <a href="#datasets">Datasets</a> </li> <li> <a href="#evaluation-protocol">Evaluation Protocol</a> </li> <li> <a href="#run-evaluation">Run Evaluation</a> </li> <li> <a href="#benchmark">Benchmark</a> </li> <li> <a href="#acknowledgement">Acknowledgement</a> </li> <li> <a href="#citation">Citation</a> </li> </ol> </details>

Installation

This repo assumes you have already configured the environment from the Co-SLAM main repository. You also need a few additional dependencies, which you can install by running:

conda activate coslam
pip install -r requirements.txt

Datasets

Following iMAP and NICE-SLAM, we evaluate our method on the Replica, ScanNet and TUM RGB-D datasets. We perform further experiments on the synthetic dataset from NeuralRGBD, which contains many thin structures and simulates the noise present in real depth-sensor measurements. Download links are provided in our main repo.

In addition to those sequences and the apartment sequences captured by the authors of NICE-SLAM, we also collected our own real-world indoor scenes (MyRoom) using a RealSense D435i depth camera, which is more popular in the robotics community and whose depth quality is slightly worse than that of the Azure Kinect. You can download the MyRoom sequences (~15 GB) from here.

Evaluation Protocol

Mesh Culling

As stated in Section 1.2 of our supplementary material, mesh culling is required when evaluating the reconstruction quality of neural SLAM methods because of the extrapolation ability of neural networks. This extrapolation gives neural SLAM methods their hole-filling ability, but it can also produce unwanted artefacts outside the region of interest (ROI). Ideally we want a culling strategy that removes the unwanted parts outside the ROI and leaves all other parts unchanged.

Existing culling methods used in neural SLAM/reconstruction systems are based either on a frustum strategy (NICE-SLAM and iMAP) or on a frustum + occlusion strategy (Neural-RGBD and GO-Surf). The first can leave artefacts outside the ROI (such as artefacts behind a wall), while the second also removes occluded parts inside the ROI. In Co-SLAM we propose a frustum + occlusion + virtual cameras strategy, which introduces extra virtual views that cover the occluded parts inside the region of interest. Please refer to Section 1.2 of our supplementary material for further explanation and details.
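To make the differences concrete, below is a minimal, illustrative sketch of the per-vertex visibility test that all three strategies share. It is not the actual cull_mesh.py implementation; the function name, pose convention and depth-map inputs are assumptions.

import numpy as np

def vertex_survives(pt_w, w2c, K, depth, H, W, remove_occlusion=True, eps=0.02):
    # Frustum test: project the world-space vertex into this view and check
    # that it lands inside the image and in front of the camera.
    pt_c = w2c[:3, :3] @ pt_w + w2c[:3, 3]          # world -> camera
    if pt_c[2] <= 0:                                 # behind the camera
        return False
    uv = K @ pt_c
    u, v = uv[0] / uv[2], uv[1] / uv[2]              # perspective projection
    if not (0 <= u < W and 0 <= v < H):              # outside the frustum
        return False
    # Occlusion test (Neural-RGBD/GO-Surf, Co-SLAM): also drop vertices that
    # lie behind the observed depth at this pixel.
    if remove_occlusion:
        d_obs = depth[int(v), int(u)]
        if d_obs > 0 and pt_c[2] > d_obs + eps:      # hidden behind a surface
            return False
    return True

A vertex is kept if it survives this test in at least one view; the strategies differ only in whether the occlusion test is enabled and whether virtual views are added to the set of cameras.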

We provide our culling script that subsumes all three culling strategies mentioned above in case you want to follow the other two protocols. Here is an example usage:

INPUT_MESH=output/Replica/office0/mesh_final.ply
python cull_mesh.py --config configs/Replica/office0.yaml --input_mesh $INPUT_MESH --remove_occlusion --virtual_cameras --gt_pose  # Co-SLAM strategy
python cull_mesh.py --config configs/Replica/office0.yaml --input_mesh $INPUT_MESH --remove_occlusion --gt_pose  # Neural-RGBD/GO-Surf strategy
python cull_mesh.py --config configs/Replica/office0.yaml --input_mesh $INPUT_MESH --gt_pose  # iMAP/NICE-SLAM strategy

See cull_mesh.py for the full list of command line arguments.

Note that to use the Co-SLAM culling strategy you need virtual camera views. We provide the data required to evaluate Co-SLAM on both the Replica and Neural-RGBD datasets.

Virtual Camera Views

The purpose of the virtual views is simply to cover regions that might not be observed by the existing views, so their selection is very flexible. To give you a flavour of how this can be done, we also provide a simple example Python script that creates virtual cameras for the Replica sequences in an interactive fashion:

python create_virtual_cameras_replica.py --config configs/Replica/office0.yaml --data_dir data/Replica/office0

It will first create a TSDF-Fusion mesh using the GT poses. Once the mesh is created, an Open3D window will pop up; adjust the view-point with your mouse to cover the unobserved regions, then press the . key to save the view-point.
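If you want to adapt this interactive selection to your own data, it can be reproduced with a few lines of Open3D. The sketch below is illustrative only; the callback, output naming and saved quantities are assumptions rather than what create_virtual_cameras_replica.py actually does.

import numpy as np
import open3d as o3d

def pick_virtual_views(mesh_path, out_prefix="virtual_cam"):
    mesh = o3d.io.read_triangle_mesh(mesh_path)
    vis = o3d.visualization.VisualizerWithKeyCallback()
    vis.create_window()
    vis.add_geometry(mesh)
    state = {"idx": 0}

    def save_view(vis):
        # Save the current camera extrinsics and the rendered depth buffer
        params = vis.get_view_control().convert_to_pinhole_camera_parameters()
        np.savetxt(f"{out_prefix}_{state['idx']}.txt", params.extrinsic)
        depth = np.asarray(vis.capture_depth_float_buffer(do_render=True))
        np.save(f"{out_prefix}_{state['idx']}_depth.npy", depth)
        state["idx"] += 1
        return False

    vis.register_key_callback(ord("."), save_view)  # '.' saves the view-point
    vis.run()
    vis.destroy_window()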

Run Evaluation

Reconstruction

To evaluate reconstruction quality, first download the data needed for evaluation. It contains the virtual camera views, GT meshes culled with those virtual views using our proposed culling strategy (for the 3D metrics) and the unseen points on the GT meshes (for the 2D metric). For reproducibility we also include the 1000 sampled camera poses used for 2D evaluation. The data for each scene is organised as follows:

<scene_name>
├── virtual_cameras             # virtual camera views
│   ├── 0.txt
│   ├── 0.png
│   ├── 1.txt
│   ├── 1.png
│   ├── ...
├── sampled_poses_1000.npz      # 1000 sampled camera poses
├── gt_pc_unseen.npy            # point cloud of the unseen parts
├── gt_unseen.ply               # mesh of the unseen parts
├── gt_mesh_cull_virt_cams.ply  # culled ground-truth mesh
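A quick way to sanity-check the downloaded data from Python (the 4x4 pose-matrix layout of the virtual-camera .txt files is our assumption, and the key names inside the .npz archive are not documented here):

import numpy as np

scene = "eval_data/Replica/office0"                   # example scene path
pose = np.loadtxt(f"{scene}/virtual_cameras/0.txt")   # one virtual camera pose
sampled = np.load(f"{scene}/sampled_poses_1000.npz")  # poses used for 2D evaluation
unseen = np.load(f"{scene}/gt_pc_unseen.npy")         # unseen GT surface points
print(pose.shape, sampled.files, unseen.shape)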

Then run the culling script to cull the reconstructed mesh:

# Put your own path to reconstructed mesh. Here is just an example
INPUT_MESH=output/Replica/office0/mesh_final.ply
VIRT_CAM_PATH=eval_data/Replica/office0/virtual_cameras
python cull_mesh.py --config configs/Replica/office0.yaml --input_mesh $INPUT_MESH --remove_occlusion --virtual_cameras --virt_cam_path $VIRT_CAM_PATH --gt_pose  # Co-SLAM strategy

Once you have the culled reconstructed mesh, the evaluation follows a similar pipeline to iMAP/NICE-SLAM.

REC_MESH=output/Replica/office0/mesh_final_cull_virt_cams.ply
GT_MESH=eval_data/Replica/office0/gt_mesh_cull_virt_cams.ply
python eval_recon.py --rec_mesh $REC_MESH --gt_mesh $GT_MESH --dataset_type Replica -2d -3d
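For reference, the 3D metrics (Acc, Comp, Comp Ratio) reduce to nearest-neighbour distances between point clouds sampled from the culled reconstructed mesh and the culled GT mesh. The sketch below uses trimesh and SciPy; the sample count and the 5 cm completion threshold are common choices and may differ from what eval_recon.py actually uses.

import numpy as np
import trimesh
from scipy.spatial import cKDTree

def mesh_3d_metrics(rec_mesh_path, gt_mesh_path, n_samples=200000, thresh=0.05):
    # Sample point clouds from both culled meshes (assumed to be in metres)
    rec_pts = np.array(trimesh.load(rec_mesh_path).sample(n_samples))
    gt_pts = np.array(trimesh.load(gt_mesh_path).sample(n_samples))
    # Accuracy: reconstructed -> GT distances; Completion: GT -> reconstructed
    d_rec_to_gt, _ = cKDTree(gt_pts).query(rec_pts)
    d_gt_to_rec, _ = cKDTree(rec_pts).query(gt_pts)
    acc = d_rec_to_gt.mean() * 100                    # [cm]
    comp = d_gt_to_rec.mean() * 100                   # [cm]
    comp_ratio = (d_gt_to_rec < thresh).mean() * 100  # [%] within 5 cm
    return acc, comp, comp_ratio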

Tracking

We follow exactly the same protocol as NICE-SLAM for evaluating the absolute trajectory error (ATE). Please refer to NICE-SLAM or our main repo for more details.
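For intuition, ATE RMSE boils down to the root-mean-square distance between corresponding GT and estimated camera positions after a rigid alignment. A minimal numpy sketch of this standard TUM-style computation (a simplification, not the benchmark script itself):

import numpy as np

def ate_rmse(gt_xyz, est_xyz):
    # gt_xyz, est_xyz: (N, 3) arrays of corresponding camera positions
    mu_g, mu_e = gt_xyz.mean(0), est_xyz.mean(0)
    G, E = gt_xyz - mu_g, est_xyz - mu_e
    # Closed-form (Umeyama/Kabsch) rotation aligning the estimate to GT
    U, _, Vt = np.linalg.svd(E.T @ G)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = Vt.T @ D @ U.T
    t = mu_g - R @ mu_e
    aligned = est_xyz @ R.T + t
    return np.sqrt(np.mean(np.sum((aligned - gt_xyz) ** 2, axis=1)))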

Benchmark

In this section we compare existing methods on reconstruction quality, tracking accuracy and runtime performance. All performance analysis was conducted on the same computing platform: a desktop PC with a 3.60 GHz Intel Core i7-12700K CPU and a single NVIDIA RTX 3090 Ti GPU. To rule out the effect of method-dependent implementation details such as data loading and multi-processing strategies, we report only the time needed to perform tracking/mapping iterations and the corresponding FPS. We also report the total time needed to process each sequence on the individual dataset page under each section.
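To make the FPS columns in the tables below concrete: as noted under each table, the reported FPS is simply the inverse of one full optimization cycle, i.e. the per-iteration time multiplied by the number of iterations per frame. A tiny illustrative helper:

def cycle_fps(ms_per_iter, n_iters):
    # FPS of one full tracking/mapping cycle:
    # 1000 ms / (ms per iteration x iterations per frame)
    return 1000.0 / (ms_per_iter * n_iters)

print(round(cycle_fps(5.8, 10), 2))  # Co-SLAM tracking on Replica -> ~17.24 FPS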

Replica

| Methods | Acc↓<br/>[cm] | Comp↓<br/>[cm] | Comp Ratio↑<br/>[%] | Depth L1↓<br/>[cm] | Track.↓<br/>[ms x it] | Map.↓<br/>[ms x it] | Track. FPS↑ | Map. FPS↑ | #param↓ |
|---|---|---|---|---|---|---|---|---|---|
| iMAP | 3.62 | 4.93 | 80.51 | 4.64 | 16.8x6 | 44.8x10 | 9.92 | 2.23 | 0.26M |
| NICE-SLAM | 2.37 | 2.64 | 91.13 | 1.90 | 7.8x10 | 82.5x60 | 13.70 | 0.20 | 17.4M |
| Vox-Fusion | 1.88 | 2.56 | 90.93 | 2.91 | 15.8x30 | 46.0x10 | 2.11 | 2.17 | 0.87M |
| ESLAM | 2.18 | 1.75 | 96.46 | 0.94 | 6.9x8 | 18.4x15 | 18.11 | 3.62 | 9.29M |
| Co-SLAM | 2.10 | 2.08 | 93.44 | 1.51 | 5.8x10 | 9.8x10 | 17.24 | 10.20 | 0.26M |

Here the tracking/mapping FPS indicates how fast a complete tracking/mapping optimization cycle runs, and thus does not correspond to the actual runtime FPS of the system. For the overall system runtime we report the total time needed to process an entire sequence in benchmark/replica. Also note that on Replica, mapping happens roughly every 5 frames for iMAP*, NICE-SLAM and Co-SLAM, while Vox-Fusion adopts a different multi-processing strategy and performs mapping as frequently as possible.

Please refer to benchmark/replica for more details and breakdown of each scene.

SyntheticRGBD

| Methods | Acc↓<br/>[cm] | Comp↓<br/>[cm] | Comp Ratio↑<br/>[%] | Depth L1↓<br/>[cm] | Track.↓<br/>[ms x it] | Map.↓<br/>[ms x it] | Track. FPS↑ | Map. FPS↑ | #param↓ |
|---|---|---|---|---|---|---|---|---|---|
| iMAP* | 18.29 | 26.41 | 20.73 | 47.22 | 31.0x50 | 49.1x300 | 0.64 | 0.07 | 0.22M |
| NICE-SLAM | 5.95 | 5.30 | 77.46 | 6.32 | 12.3x10 | 50.4x60 | 8.13 | 0.33 | 3.11M |
| Vox-Fusion | 4.10 | 4.81 | 81.78 | 6.13 | 16.6x30 | 46.2x10 | 2.00 | 2.16 | 0.84M |
| Co-SLAM | 2.95 | 2.96 | 86.88 | 3.02 | 6.4x10 | 10.4x10 | 15.63 | 9.62 | 0.26M |

Here the tracking/mapping FPS indicates how fast a complete tracking/mapping optimization cycle runs, and thus does not correspond to the actual runtime FPS of the system. For the overall system runtime we report the total time needed to process an entire sequence in benchmark/rgbd. All experiments were run with each method's Replica settings.

Please refer to benchmark/rgbd for more details of each scene.

ScanNet

| Methods | ATE↓<br/>[cm] | ATE↓ w/o align<br/>[cm] | Track.↓<br/>[ms x it] | Map.↓<br/>[ms x it] | Track. FPS↑ | Map. FPS↑ | #param↓ |
|---|---|---|---|---|---|---|---|
| iMAP* | 36.67 | - | 30.4x50 | 44.9x300 | 0.66 | 0.07 | 0.2M |
| NICE-SLAM | 9.63 | 23.97 | 12.3x50 | 125.3x60 | 1.63 | 0.13 | 10.3M |
| Vox-Fusion | 8.22 | - | 29.4x30 | 85.8x15 | 1.13 | 0.78 | 1.1M |
| ESLAM | 7.42 | - | 7.4x30 | 22.4x30 | 4.54 | 1.49 | 10.5M |
| Co-SLAM | 9.37 | 18.01 | 7.8x10 | 20.2x10 | 12.82 | 4.95 | 0.8M |
| Co-SLAM† | 8.75 | - | 7.8x20 | 20.2x10 | 6.41 | 4.95 | 0.8M |

Here the tracking/mapping FPS indicates how fast a complete tracking/mapping optimization cycle runs, and thus does not correspond to the actual runtime FPS of the system. For the overall system runtime we report the total time needed to process an entire sequence in benchmark/scannet. Also note that on ScanNet, mapping happens roughly every 5 frames for iMAP*, NICE-SLAM and Co-SLAM, while Vox-Fusion adopts a different multi-processing strategy and performs mapping as frequently as possible.

Please refer to benchmark/scannet for more details of each scene.

TUM-RGBD

| Methods | ATE↓<br/>[cm] | Track.↓<br/>[ms x it] | Map.↓<br/>[ms x it] | Track. FPS↑ | Map. FPS↑ | #param |
|---|---|---|---|---|---|---|
| iMAP | 4.23 | - | - | - | - | - |
| iMAP* | 6.10 | 29.6x200 | 44.3x300 | 0.17 | 0.08 | 0.2M |
| NICE-SLAM | 2.50 | 47.1x200 | 189.2x60 | 0.11 | 0.09 | 101.6M |
| Co-SLAM | 2.40 | 7.5x10 | 19.0x20 | 13.33 | 2.63 | 1.6M |
| Co-SLAM† | 2.17 | 7.5x20 | 19.0x20 | 6.67 | 2.63 | 1.6M |

Here the tracking/mapping FPS indicates how fast a complete tracking/mapping optimization cycle runs, and thus does not correspond to the actual runtime FPS of the system. For the overall system runtime we report the total time needed to process an entire sequence in benchmark/tum. Also note that on TUM-RGBD, mapping happens roughly every frame for NICE-SLAM and iMAP*, and roughly every 5 frames for Co-SLAM.

Please refer to benchmark/tum for more details of each scene.

Acknowledgement

This repository adapts code from several awesome repositories, including NICE-SLAM, NeuralRGBD and GO-Surf. Thanks for making the code available. We also thank Zihan Zhu of NICE-SLAM and Edgar Sucar of iMAP for their quick responses regarding the details of their methods.

The research presented here has been supported by a sponsored research award from Cisco Research and the UCL Centre for Doctoral Training in Foundational AI under UKRI grant number EP/S021566/1. This project made use of time on Tier 2 HPC facility JADE2, funded by EPSRC (EP/T022205/1).

Citation

If you find our code/work useful in your research or wish to refer to the benchmark results, please consider citing the following:

@article{wang2023co-slam,
  title={Co-SLAM: Joint Coordinate and Sparse Parametric Encodings for Neural Real-Time SLAM},
  author={Wang, Hengyi and Wang, Jingwen and Agapito, Lourdes},
  journal={arXiv preprint arXiv:2304.14377},
  year={2023}
}

@inproceedings{wang2022go-surf,
  author={Wang, Jingwen and Bleja, Tymoteusz and Agapito, Lourdes},
  booktitle={2022 International Conference on 3D Vision (3DV)},
  title={GO-Surf: Neural Feature Grid Optimization for Fast, High-Fidelity RGB-D Surface Reconstruction},
  year={2022},
  pages={433-442},
  organization={IEEE}
}

Contact

Contact Hengyi Wang and Jingwen Wang for questions and bug reports.