# MV-NRSfM: High Fidelity 3D Reconstructions with Limited Physical Views (3DV 2021)
<img src="graphics/3dv2021_banner.jpg" width=300>

Paper | Project page | Pretrained models | Featured Post
## TL;DR Summary
Enforcing <b>neural shape priors</b> and <b>multi-view equivariance</b> within modern deep 2D-3D lifting enables high-fidelity 3D reconstructions from just <b>2-3 uncalibrated cameras</b>, compared to the <b>>100 calibrated cameras</b> required by traditional methods.
## Overview
<p align="center"> <img width="900" src=graphics/neural_shape_prior.gif> </p>
## Requirements
- Tested with PyTorch 1.11 and CUDA 11.3
## Setup
- Create a conda environment and activate it:

```
conda env create -f environment.yml
conda activate mvnrsfm
pip install opencv-python
```
- Do a clean install of the submodule `robust_loss_pytorch`:

```
pip install git+https://github.com/jonbarron/robust_loss_pytorch
```
- Do a clean install of the submodule `torch_batch_svd` (if using GPU):

```
cd modules/helpers/torch-batch-svd
export CUDA_HOME=/your/cuda/home/directory/
python setup.py install
```
## Pre-trained models
Fetch the pre-trained MV-NRSfM models from Zenodo:
```
zenodo_get 10.5281/zenodo.7346689
unzip models.zip
rm -rf models.zip && rm -rf md5sums.txt
```
## Dataset format
This codebase expects the final `neural-shape-prior` directory to look like this:

```
${neural-shape-prior}
|-- data
|   |-- Cheetah
|   |   |-- annot/
|   |   `-- images/
|   `-- Human36M
|       |-- annot/
|       `-- images/
`-- models
```

Annotations are stored as `.pkl` files within the `annot/` directory. For the exact data-structure fields within the pickle files, please refer to the MBW-Data format.
## Demo
We provide pretrained models for Human36M (Subject #1, Directions 1 Sequence), the Cheetah dataset, and the Monkey dataset, along with the annotation data in the `data` directory.
Please run `demo.ipynb` to play around with the validation part of the Neural Shape Prior based MV-NRSfM. You can either use the pretrained models and data provided in this repository or plug your own trained models into this demo script. You can visualize the 3D structure as well as the 2D predictions (with and without overlaying on the original RGB image). The reconstructed 3D structure can be visualized via plotly.
## Run unit tests

```
./scripts/unit_tests.sh
```
## Training (Generate 3D labels from MV-NRSfM)

```
./scripts/train_mvnrsfm.sh
```
## Citation
If you use our code or models in your research, please cite with:
```
@inproceedings{dabhi2021mvnrsfm,
  title={High Fidelity 3D Reconstructions with Limited Physical Views},
  author={Dabhi, Mosam and Wang, Chaoyang and Saluja, Kunal and Jeni, Laszlo and Fasel, Ian and Lucey, Simon},
  booktitle={2021 International Conference on 3D Vision (3DV)},
  year={2021},
  ee={https://ieeexplore.ieee.org/abstract/document/9665845},
  organization={IEEE}
}
```