
TiNeuVox: Time-Aware Neural Voxels

ACM SIGGRAPH Asia 2022

Project Page | ACM Paper | arXiv Paper | Video

Fast Dynamic Radiance Fields with Time-Aware Neural Voxels
Jiemin Fang<sup>1,2*</sup>, Taoran Yi<sup>2*</sup>, Xinggang Wang<sup>✉2</sup>, Lingxi Xie<sup>3</sup>, </br>Xiaopeng Zhang<sup>3</sup>, Wenyu Liu<sup>2</sup>, Matthias Nießner<sup>4</sup>, Qi Tian<sup>3</sup>
<sup>1</sup>Institute of AI, HUST   <sup>2</sup>School of EIC, HUST   <sup>3</sup>Huawei Cloud   <sup>4</sup>TUM


Our method converges very quickly. This is a comparison between D-NeRF (left) and our method (right).

We propose a radiance field framework that represents scenes with time-aware voxel features, named TiNeuVox. A tiny coordinate deformation network is introduced to model coarse motion trajectories, and temporal information is further enhanced in the radiance network. A multi-distance interpolation method is proposed and applied to voxel features to model both small and large motions. Our framework significantly accelerates the optimization of dynamic radiance fields while maintaining high rendering quality. Empirical evaluation is performed on both synthetic and real scenes. Our TiNeuVox completes training in only 8 minutes with 8 MB of storage while showing similar or even better rendering performance than previous dynamic NeRF methods.
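The multi-distance interpolation idea can be sketched as follows. This is a minimal NumPy illustration of the concept, not the repository's implementation: the average-pooling scheme, the scale set `(1, 2, 4)`, and the voxel-index coordinate convention are all assumptions made for this sketch.

```python
import numpy as np

def trilinear(grid, pts):
    """Trilinearly interpolate a (C, D, H, W) grid at (N, 3) float voxel indices."""
    dims = np.array(grid.shape[1:])
    p0 = np.clip(np.floor(pts).astype(int), 0, dims - 2)   # lower corner, (N, 3)
    w = np.clip(pts - p0, 0.0, 1.0)                        # fractional offsets, (N, 3)
    out = 0.0
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                idx = p0 + np.array([dz, dy, dx])
                wgt = ((w[:, 0] if dz else 1 - w[:, 0]) *
                       (w[:, 1] if dy else 1 - w[:, 1]) *
                       (w[:, 2] if dx else 1 - w[:, 2]))
                out = out + grid[:, idx[:, 0], idx[:, 1], idx[:, 2]] * wgt
    return out.T                                            # (N, C)

def multi_distance_interp(voxel_feat, xyz, scales=(1, 2, 4)):
    """Query the feature grid at several pooled resolutions and concatenate.

    A coarser grid covers a larger spatial extent per cell, so its features
    respond to larger motions; the finest grid captures small motions.
    """
    C, D, H, W = voxel_feat.shape
    feats = []
    for s in scales:
        g = voxel_feat
        if s > 1:
            # average-pool by factor s (assumes D, H, W divisible by s)
            g = voxel_feat.reshape(C, D // s, s, H // s, s, W // s, s).mean(axis=(2, 4, 6))
        feats.append(trilinear(g, xyz / s))
    return np.concatenate(feats, axis=-1)                   # (N, C * len(scales))
```

Concatenating features sampled at several effective distances lets a single grid serve both fine detail and large deformations without storing multiple full-resolution grids.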

Notes

Requirements

Data Preparation

For synthetic scenes:
The dataset provided by D-NeRF is used. You can download it from Dropbox, then organize it as follows.

├── data_dnerf 
│   ├── mutant
│   ├── standup 
│   ├── ...
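A quick way to verify the layout before training is a small sanity check. The `transforms_{train,val,test}.json` naming below follows the standard D-NeRF/Blender dataset format; this helper is an illustrative sketch, not part of the repository, so adjust it if your copy of the data differs.

```python
from pathlib import Path

def check_dnerf_layout(root="data_dnerf", scenes=("mutant", "standup")):
    """Return a list of problems with the dataset layout (empty list = OK).

    Checks that each scene folder exists and contains the
    transforms_{train,val,test}.json files a D-NeRF-style loader expects.
    """
    root = Path(root)
    problems = []
    for scene in scenes:
        d = root / scene
        if not d.is_dir():
            problems.append(f"missing scene folder: {d}")
            continue
        for split in ("train", "val", "test"):
            f = d / f"transforms_{split}.json"
            if not f.is_file():
                problems.append(f"missing {f}")
    return problems
```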

For real dynamic scenes:
The dataset provided by HyperNeRF is used. You can download scenes from the HyperNeRF dataset and organize them as in Nerfies.

Training

For training synthetic scenes such as standup, run

python run.py --config configs/nerf-*/standup.py 

Replace * with small for TiNeuVox-S or base for TiNeuVox-B. Add --render_video to render a video after training.

For training real scenes such as vrig_chicken, run

python run.py --config configs/vrig_dataset/chicken.py  

Evaluation

Run the following script to evaluate the model.

For synthetic ones:

python run.py --config configs/nerf-small/standup.py --render_test --render_only --eval_psnr --eval_lpips_vgg --eval_ssim 

For real ones:

python run.py --config configs/vrig_dataset/chicken.py --render_test --render_only --eval_psnr

To compare fairly with the values reported in D-NeRF, metric.py is provided to evaluate the rendered images directly on their uint8 values.
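Evaluating on uint8 values means the metric sees the images as they are saved to disk, after quantization to 256 levels, rather than the model's float outputs. A minimal sketch of PSNR computed this way (an illustration of the idea, not the repository's metric.py):

```python
import numpy as np

def psnr_uint8(img, ref):
    """PSNR between two uint8 images, computed on the quantized values.

    Matches evaluation on saved renders: both inputs are scaled from
    [0, 255] to [0, 1] before the mean-squared error is taken.
    """
    img = img.astype(np.float64) / 255.0
    ref = ref.astype(np.float64) / 255.0
    mse = np.mean((img - ref) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return -10.0 * np.log10(mse)     # peak value is 1.0 after scaling
```

Because quantization alone perturbs float renders slightly, PSNR on uint8 images can differ by a small amount from PSNR on the raw float outputs, which is why using the same convention as D-NeRF matters for a fair comparison.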

Main Results

Please see our video for more rendered results.

Synthetic Scenes

| Method | w/ Time Enc. | w/ Explicit Rep. | Time | Storage | PSNR | SSIM | LPIPS |
| --- | :---: | :---: | --- | --- | --- | --- | --- |
| NeRF | | | ∼ hours | 5 MB | 19.00 | 0.87 | 0.18 |
| DirectVoxGO | | ✓ | 5 mins | 205 MB | 18.61 | 0.85 | 0.17 |
| Plenoxels | | ✓ | 6 mins | 717 MB | 20.24 | 0.87 | 0.16 |
| T-NeRF | ✓ | | ∼ hours | — | 29.51 | 0.95 | 0.08 |
| D-NeRF | ✓ | | 20 hours | 4 MB | 30.50 | 0.95 | 0.07 |
| TiNeuVox-S (ours) | ✓ | ✓ | 8 mins | 8 MB | 30.75 | 0.96 | 0.07 |
| TiNeuVox-B (ours) | ✓ | ✓ | 28 mins | 48 MB | 32.67 | 0.97 | 0.04 |

Real Dynamic Scenes

| Method | Time | PSNR | MS-SSIM |
| --- | --- | --- | --- |
| NeRF | ∼ hours | 20.1 | 0.745 |
| NV | ∼ hours | 16.9 | 0.571 |
| NSFF | ∼ hours | 26.3 | 0.916 |
| Nerfies | ∼ hours | 22.2 | 0.803 |
| HyperNeRF | 32 hours | 22.4 | 0.814 |
| TiNeuVox-S (ours) | 10 mins | 23.4 | 0.813 |
| TiNeuVox-B (ours) | 30 mins | 24.3 | 0.837 |

Acknowledgements

This repository is partially based on DirectVoxGO and D-NeRF. Thanks for their awesome work.

Citation

If you find this repository/work helpful in your research, please consider citing the paper and giving it a ⭐.

@inproceedings{TiNeuVox,
  author = {Fang, Jiemin and Yi, Taoran and Wang, Xinggang and Xie, Lingxi and Zhang, Xiaopeng and Liu, Wenyu and Nie\ss{}ner, Matthias and Tian, Qi},
  title = {Fast Dynamic Radiance Fields with Time-Aware Neural Voxels},
  year = {2022},
  booktitle = {SIGGRAPH Asia 2022 Conference Papers}
}