TransformerFusion: Monocular RGB Scene Reconstruction using Transformers

Project Page | Paper | Video


TransformerFusion: Monocular RGB Scene Reconstruction using Transformers
Aljaž Božič, Pablo Palafox, Justus Thies, Angela Dai, Matthias Nießner
NeurIPS 2021

demo


TODOs

How to install the framework

# Clone the repository together with its submodules.
git clone --recurse-submodules https://github.com/AljazBozic/TransformerFusion.git
cd TransformerFusion
# Create and activate the conda environment.
conda env create -f environment.yml
conda activate tf
# Build and install the native extensions.
cd csrc
python setup.py install
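
As a quick sanity check that the environment resolved correctly, you can probe the PyTorch installation (this assumes environment.yml provides PyTorch with CUDA, which this README does not state explicitly):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"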

Evaluate the reconstructions

We evaluate method performance on the test scenes of the ScanNet dataset.

We compare scene reconstructions to ground-truth meshes obtained by fusing RGB-D data. Since the ground-truth meshes are incomplete, we additionally compute occlusion masks from the RGB-D scans, so that reconstructions that are more complete than the ground truth are not penalized.
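
Mask-aware evaluation of this kind typically measures accuracy as the distance from the predicted surface to the ground truth (skipping predicted points that fall into occluded space) and completeness as the distance from the ground truth to the prediction. The sketch below illustrates the idea with an F-score at a fixed distance threshold; the function name, the 5 cm threshold, and the per-point occluded mask are illustrative assumptions, not the exact metric implemented in src/evaluation/eval.py.

# Illustrative sketch of mask-aware surface evaluation; the actual metric
# is implemented in src/evaluation/eval.py and may differ in details.
import numpy as np
from scipy.spatial import cKDTree

def masked_f_score(pred_pts, gt_pts, occluded, tau=0.05):
    """pred_pts (N,3) and gt_pts (M,3) are points sampled from the two meshes;
    occluded (N,) marks predicted points that lie in occluded space;
    tau is the distance threshold in meters (5 cm is an assumed value)."""
    occluded = np.asarray(occluded, dtype=bool)
    # Accuracy: visible predicted points should lie close to the GT surface.
    d_pred, _ = cKDTree(gt_pts).query(pred_pts[~occluded])
    precision = float((d_pred < tau).mean())
    # Completeness: GT points should lie close to the predicted surface.
    d_gt, _ = cKDTree(pred_pts).query(gt_pts)
    recall = float((d_gt < tau).mean())
    return 2.0 * precision * recall / max(precision + recall, 1e-8)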

You can download both the ground-truth meshes and the occlusion masks here. To evaluate your reconstructions, place them in data/reconstructions and extract the ground-truth data to data/groundtruth. Reconstructions are expected to be named after the ScanNet test scenes, e.g. scene0733_00.ply; the expected layout is sketched below. The following script then computes the evaluation metrics over all provided scene meshes:
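For orientation, the directory layout after these steps would look roughly as follows (only scene0733_00.ply is named in this README; everything else is a placeholder):

data/
├── groundtruth/          # extracted ground-truth meshes and occlusion masks
└── reconstructions/
    └── scene0733_00.ply  # your reconstruction of a ScanNet test scene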

conda activate tf
python src/evaluation/eval.py

Citation

If you find our work useful in your research, please consider citing:

@article{bozic2021transformerfusion,
  title={TransformerFusion: Monocular RGB Scene Reconstruction using Transformers},
  author={Bozic, Aljaz and Palafox, Pablo and Thies, Justus and Dai, Angela and Niessner, Matthias},
  journal={Proc. Neural Information Processing Systems (NeurIPS)},
  year={2021}
}

Related work

Some other related work on monocular RGB reconstruction of indoor scenes:

License

The code from this repository is released under the MIT license.