MINE: Continuous-Depth MPI with Neural Radiance Fields

Project Page | YouTube | bilibili

PyTorch implementation for our ICCV 2021 paper.<br><br> MINE: Towards Continuous Depth MPI with NeRF for Novel View Synthesis
Jiaxin Li*<sup>1</sup>, Zijian Feng*<sup>1</sup>, Qi She<sup>1</sup>, Henghui Ding<sup>1</sup>, Changhu Wang<sup>1</sup>, Gim Hee Lee<sup>2</sup> <br> <sup>1</sup>ByteDance, <sup>2</sup>National University of Singapore
*denotes equal contribution

Our MINE takes a single image as input and densely reconstructs the frustum of the camera, through which we can easily render novel views of the given scene:

(Demo GIF: novel views rendered from a single image of the LLFF fern scene.)

The overall architecture of our method:

<img src='resources/pipeline.png'/>
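
At the core of any MPI-style representation is a stack of fronto-parallel RGBA planes that are warped into the target view and alpha-composited into the output image. The snippet below is a minimal sketch of that compositing step only (the back-to-front "over" operator); it is for illustration and is not the renderer used in this repository — the encoder, the continuous-depth plane prediction, and the homography warping described in the paper are omitted.

```python
# Minimal sketch: alpha-composite N fronto-parallel RGBA planes (an MPI)
# into a single image. Planes are assumed to be ordered far -> near and
# already warped into the target view. Illustration only, not this repo's renderer.
import torch

def composite_mpi(rgb, alpha):
    """rgb: (N, 3, H, W) per-plane colour; alpha: (N, 1, H, W) opacity in [0, 1]."""
    out = torch.zeros_like(rgb[0])                        # accumulated colour, (3, H, W)
    for i in range(rgb.shape[0]):                         # iterate far -> near
        out = rgb[i] * alpha[i] + out * (1.0 - alpha[i])  # "over" operator
    return out

# Toy usage with random planes (no warping, no learned network).
N, H, W = 32, 64, 96
rgb = torch.rand(N, 3, H, W)
alpha = torch.rand(N, 1, H, W)
image = composite_mpi(rgb, alpha)                         # (3, H, W)
```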

Run training on the LLFF dataset:

First, set up the conda environment:

conda env create -f environment.yml 
conda activate MINE

Download the pre-downsampled version of the LLFF dataset from Google Drive, unzip it into the project root, and then start training with the following command:

sh start_training.sh MASTER_ADDR="localhost" MASTER_PORT=1234 N_NODES=1 GPUS_PER_NODE=2 NODE_RANK=0 WORKSPACE=/run/user/3861/vs_tmp DATASET=llff VERSION=debug EXTRA_CONFIG='{"training.gpus": "0,1"}'
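EXTRA_CONFIG is a JSON string whose dotted keys override individual configuration entries (here, the GPU ids used for training). As a rough illustration of how dotted-key overrides of this kind are usually merged into a nested config — an assumption for illustration only, not the repository's actual config-handling code — consider:

```python
# Hypothetical sketch: merge dotted-key JSON overrides (as passed via
# EXTRA_CONFIG) into a nested configuration dictionary. Not the repo's code.
import json

def apply_overrides(config, extra_config_json):
    """Merge a JSON string of dotted-key overrides into a nested dict."""
    for dotted_key, value in json.loads(extra_config_json).items():
        node = config
        *parents, leaf = dotted_key.split(".")
        for key in parents:
            node = node.setdefault(key, {})
        node[leaf] = value
    return config

config = {"training": {"gpus": "0", "epochs": 100}}
apply_overrides(config, '{"training.gpus": "0,1"}')
print(config)  # {'training': {'gpus': '0,1', 'epochs': 100}}
```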

The TensorBoard logs and checkpoints are written to the sub-working directory (WORKSPACE + VERSION).

Apart from the LLFF dataset, we also experimented on the RealEstate10K, KITTI Raw, and Flowers Light Fields datasets; the data pre-processing code and training flow for these datasets will be released later.

Running our pretrained models:

We release pretrained models trained on the RealEstate10K, KITTI, and Flowers datasets:

| Dataset | N (planes) | Input Resolution | Download Link |
| --- | --- | --- | --- |
| RealEstate10K | 32 | 384x256 | Google Drive |
| RealEstate10K | 64 | 384x256 | Google Drive |
| KITTI | 32 | 768x256 | Google Drive |
| KITTI | 64 | 768x256 | Google Drive |
| Flowers | 32 | 512x384 | Google Drive |
| Flowers | 64 | 512x384 | Google Drive |

To run the models, download the checkpoint and the hyper-parameter yaml file, place them in the same directory, and then run the following script:

python3 visualizations/image_to_video.py --checkpoint_path MINE_realestate10k_384x256_monodepth2_N64/checkpoint.pth --gpus 0 --data_path visualizations/home.jpg --output_dir .
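If the script cannot load the downloaded files, a quick sanity check is to open the checkpoint directly and inspect its top-level keys. The snippet below assumes only that checkpoint.pth is a standard torch.save archive; the exact keys inside it are not documented here.

```python
# Quick sanity check on a downloaded checkpoint: load it on CPU and list
# its top-level keys. Assumes only that the file was written with torch.save.
import torch

ckpt = torch.load(
    "MINE_realestate10k_384x256_monodepth2_N64/checkpoint.pth",
    map_location="cpu",
)
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
```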

Citation

If you find our work helpful to your research, please cite our paper:

@inproceedings{mine2021,
  title={MINE: Towards Continuous Depth MPI with NeRF for Novel View Synthesis},
  author={Jiaxin Li and Zijian Feng and Qi She and Henghui Ding and Changhu Wang and Gim Hee Lee},
  year={2021},
  booktitle={ICCV},
}