# 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering
arXiv Preprint
Project Page | arXiv Paper
Guanjun Wu<sup>1*</sup>, Taoran Yi<sup>2*</sup>, Jiemin Fang<sup>3‡</sup>, Lingxi Xie<sup>3</sup>, <br>Xiaopeng Zhang<sup>3</sup>, Wei Wei<sup>1</sup>, Wenyu Liu<sup>2</sup>, Qi Tian<sup>3</sup>, Xinggang Wang<sup>2‡✉</sup>
<sup>1</sup>School of CS, HUST <sup>2</sup>School of EIC, HUST <sup>3</sup>Huawei Inc.
<sup>*</sup> Equal Contributions. <sup>‡</sup> Project Lead. <sup>✉</sup> Corresponding Author.
Our method converges very quickly and achieves real-time rendering speed.
Colab demo: (Thanks to camenduru.)
<video width="320" height="240" controls> <source src="assets/teaservideo.mp4" type="video/mp4"> </video> <video width="320" height="240" controls> <source src="assets/cut_roasted_beef_time.mp4" type="video/mp4"> </video>

## Environmental Setups
Please follow 3D-GS to install the required packages.
```bash
git clone https://github.com/hustvl/4DGaussians
cd 4DGaussians
git submodule update --init --recursive

conda create -n Gaussians4D python=3.7
conda activate Gaussians4D

pip install -r requirements.txt
pip install -e submodules/depth-diff-gaussian-rasterization
pip install -e submodules/simple-knn
```
In our environment, we use pytorch=1.13.1+cu116.
## Data Preparation
**For synthetic scenes:**
The dataset provided in D-NeRF is used. You can download the dataset from dropbox.
**For real dynamic scenes:**
The dataset provided in HyperNeRF is used. You can download scenes from the HyperNeRF dataset and organize them as in Nerfies. Meanwhile, the Plenoptic dataset can be downloaded from its official website. To save memory, extract the frames of each video and then organize your dataset as follows.
```
├── data
│   ├── dnerf
│   │   ├── mutant
│   │   ├── standup
│   │   ├── ...
│   ├── hypernerf
│   │   ├── interp
│   │   ├── misc
│   │   ├── virg
│   ├── dynerf
│   │   ├── cook_spinach
│   │   │   ├── cam00
│   │   │   │   ├── images
│   │   │   │   │   ├── 0000.png
│   │   │   │   │   ├── 0001.png
│   │   │   │   │   ├── 0002.png
│   │   │   │   │   ├── ...
│   │   │   ├── cam01
│   │   │   │   ├── images
│   │   │   │   │   ├── 0000.png
│   │   │   │   │   ├── 0001.png
│   │   │   │   │   ├── ...
│   │   ├── cut_roasted_beef
│   │   │   ├── ...
```
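Mapping extracted frames into the per-camera layout above can be scripted; below is a minimal sketch. The helper name `frame_dir_for` and the `root` default are ours for illustration, not part of this repo.

```python
from pathlib import Path

# Hypothetical helper (not part of this repo): map a source video such as
# videos/cut_roasted_beef/cam00.mp4 to the images directory expected by
# the layout above, i.e. data/dynerf/<scene>/<cam>/images/.
def frame_dir_for(video_path, root="data/dynerf"):
    video = Path(video_path)
    scene = video.parent.name  # e.g. "cut_roasted_beef"
    cam = video.stem           # e.g. "cam00"
    return Path(root) / scene / cam / "images"

# The frames themselves can be extracted with any tool you prefer, e.g.:
#   ffmpeg -i cam00.mp4 data/dynerf/cut_roasted_beef/cam00/images/%04d.png
```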
## Training
For training synthetic scenes such as `bouncingballs`, run

```bash
python train.py -s data/dnerf/bouncingballs --port 6017 --expname "dnerf/bouncingballs" --configs arguments/dnerf/bouncingballs.py
```
You can customize your training config through the config files.
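For instance, a custom config is just a Python file defining parameter dictionaries. The field names below are illustrative assumptions modeled on the shipped `arguments/dnerf/*.py` files; consult those files for the actual parameter set.

```python
# Illustrative config sketch -- the dictionary name and both field names
# are assumptions; check the shipped arguments/dnerf/*.py files for the
# real ones before using this.
OptimizationParams = dict(
    coarse_iterations=3000,  # warm-up iterations (assumed name)
    iterations=20000,        # total training iterations (assumed name)
)
```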
## Rendering
Run the following script to render the images.

```bash
python render.py --model_path "output/dnerf/bouncingballs/" --skip_train --configs arguments/dnerf/bouncingballs.py &
```
## Evaluation
Run the following script to evaluate the model.

```bash
python metrics.py --model_path "output/dnerf/bouncingballs/"
```
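For reference, PSNR (one of the metrics commonly reported for this task) can be sketched as below. This standalone version only illustrates the formula, assuming images are float arrays in [0, 1]; `metrics.py` performs the actual evaluation over the rendered test views.

```python
import numpy as np

# Minimal PSNR sketch (illustration only, not the repo's metrics code):
# peak signal-to-noise ratio between two images in [0, max_val].
def psnr(img1, img2, max_val=1.0):
    mse = np.mean((img1 - img2) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * np.log10(max_val) - 10 * np.log10(mse)
```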
## Scripts
There are some helpful scripts in `scripts/`; please feel free to use them.
## Contributions
This project is still under development. Please feel free to raise issues or submit pull requests to contribute to our codebase.
Some of our source code is borrowed from 3DGS, K-Planes, HexPlane, and TiNeuVox. We sincerely appreciate the excellent work of these authors.
## Acknowledgement
We would like to express our sincere gratitude to @zhouzhenghong-gt for his revisions to our code and discussions on the content of our paper.
## Citation
If you find this repository/work helpful in your research, please consider citing the paper and giving a ⭐.
```
@article{wu20234dgaussians,
  title={4D Gaussian Splatting for Real-Time Dynamic Scene Rendering},
  author={Wu, Guanjun and Yi, Taoran and Fang, Jiemin and Xie, Lingxi and Zhang, Xiaopeng and Wei, Wei and Liu, Wenyu and Tian, Qi and Wang, Xinggang},
  journal={arXiv preprint arXiv:2310.08528},
  year={2023}
}
```