This is not an officially supported Google product.

DynIBaR: Neural Dynamic Image-Based Rendering

Project Page

Implementation of the CVPR 2023 paper (best paper honorable mention):

DynIBaR: Neural Dynamic Image-Based Rendering, CVPR 2023

Zhengqi Li<sup>1</sup>, Qianqian Wang<sup>1,2</sup>, Forrester Cole<sup>1</sup>, Richard Tucker<sup>1</sup>, Noah Snavely<sup>1</sup>

<sup>1</sup>Google Research, <sup>2</sup>Cornell Tech, Cornell University

Instructions for installing dependencies

Python Environment

This codebase was tested with Python 3.8 and CUDA 11.3. We suggest installing the dependencies in a virtual environment such as Anaconda.

To install required libraries, run:
conda env create -f enviornment_dynibar.yml
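
After the environment is created, activate it before running any of the commands below (a sketch; the environment name is defined in the yml file, and "dynibar" here is an assumption):

  # Activate the conda environment; "dynibar" is assumed to match
  # the "name" field in the yml file
  conda activate dynibar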

To install softmax splatting for preprocessing, clone and install the library from here.

To measure LPIPS, copy the "models" folder from NSFF and put it in the code root directory.
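
For example (a sketch; the path to your NSFF checkout, and where "models" sits inside it, are placeholders):

  # Copy the LPIPS "models" folder from a local NSFF checkout into the code root.
  # Replace /path/to/NSFF with your checkout; the folder's location inside NSFF may differ.
  cp -r /path/to/NSFF/models ./models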

Evaluation on the Nvidia Dynamic Scene dataset

Downloading data and pretrained checkpoints

We provide pretrained checkpoints, which can be downloaded by running:

wget https://storage.googleapis.com/gresearch/dynibar/nvidia_checkpoints.zip
unzip nvidia_checkpoints.zip

Put the unzipped "checkpoints" folder in the code root directory.
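
For example (the destination path is a placeholder for your checkout of this repository):

  # Move the unzipped checkpoints folder into the code root (path is a placeholder)
  mv checkpoints /path/to/dynibar/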

Each scene in the Nvidia dataset can be accessed here.

The input data directory should be similar to the following format: xxx/nvidia_long_release/Balloon1

Run the following command for each scene to obtain the reported quantitative results:

  # Usage: In the config txt file, you need to change "rootdir" to your code root
  # directory and "folder_path" to your input data directory, and make sure
  # "coarse_dir" points to the "checkpoints" folder you unzipped.
  python eval_nvidia.py --config configs_nvidia/eval_balloon1_long.txt

Note: It will take ~8 hours to evaluate each scene with 4x Nvidia A100 GPUs.
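
To evaluate all scenes in one pass, you can loop over the per-scene configs; a minimal sketch, assuming each scene has a config file named like eval_<scene>_long.txt in configs_nvidia/ (only eval_balloon1_long.txt is confirmed above):

  # Evaluate every per-scene config in configs_nvidia/; config file names
  # other than eval_balloon1_long.txt are an assumption.
  for cfg in configs_nvidia/eval_*_long.txt; do
    python eval_nvidia.py --config "$cfg"
  done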

Training/rendering on monocular videos

Required inputs and corresponding folders or files:

We provide template input data for the NSFF example video, which can be downloaded here.

The input data directory should be in the following format: xxx/release/kid-running/dense/***

For your own video, you need to include the same set of folders as in the template data above to run training.

To train the model:

  # Usage: config is the config txt file for the training video.
  # Make sure "rootdir" is your code root directory,
  # "folder_path" is your input data directory path, and
  # "train_scenes" is your scene folder name.
  # For example, if the data is in xxx/release/kid-running/dense/, then
  # "folder_path" is "xxx/release/" and "train_scenes" is "kid-running".
  python train.py \
  --config configs/train_kid-running.txt
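
A hypothetical sketch of such a config txt file, using only the keys mentioned in the comments above plus "expname" (the saved folder name under the 'out' directory, see the rendering section below); the real files in configs/ may contain additional required keys:

  # Hypothetical config sketch; all values are placeholders
  rootdir = /path/to/dynibar
  folder_path = /path/to/release/
  train_scenes = kid-running
  expname = kid-running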

Hyperparameters in the config txt file that you might need to tune to train a good model on in-the-wild videos

The TensorBoard logs include rendering visualizations, as shown below.

<img src="images/tensorboard.png" width="640" align="center" />
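
To view the logs, launch TensorBoard pointed at the training output directory (a sketch; using 'out' as the log directory is an assumption based on the rendering instructions below):

  # Launch TensorBoard; the log directory name is an assumption
  tensorboard --logdir out/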

To render the model:

  # Usage: config is the config txt file for the training video.
  # Please make sure "expname" in the txt file is the saved folder name in the 'out' directory.
  python render_monocular_bt.py \
  --config configs/test_kid-running.txt

Contact

For any questions related to our paper and implementation, please send an email to zhengqili@google.com.

Citation

@InProceedings{Li_2023_CVPR,
    author    = {Li, Zhengqi and Wang, Qianqian and Cole, Forrester and Tucker, Richard and Snavely, Noah},
    title     = {DynIBaR: Neural Dynamic Image-Based Rendering},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {4273-4284}
}