Consistent4D: Consistent 360° Dynamic Object Generation from Monocular Video

Yanqin Jiang<sup>1</sup>, Li Zhang<sup>2</sup>, Jin Gao<sup>1</sup>, Weiming Hu<sup>1</sup>, Yao Yao<sup>3 ✉</sup> <br> <sup>1</sup>CASIA, <sup>2</sup>Fudan University, <sup>3</sup>Nanjing University

| Project Page | arXiv | Paper | Video (Coming soon) | Data (only input video) | Data (test_dataset) |

Demo GIF

Abstract

In this paper, we present Consistent4D, a novel approach for generating dynamic objects from uncalibrated monocular videos.

Uniquely, we cast the 360-degree dynamic object reconstruction as a 4D generation problem, eliminating the need for tedious multi-view data collection and camera calibration. This is achieved by leveraging an object-level 3D-aware image diffusion model as the primary supervision signal for training Dynamic Neural Radiance Fields (DyNeRF). Specifically, we propose a Cascade DyNeRF to facilitate stable convergence and temporal continuity under a supervision signal that is discrete along the time axis. To achieve spatial and temporal consistency, we further introduce an Interpolation-driven Consistency Loss, which minimizes the discrepancy between frames rendered by DyNeRF and frames produced by a pretrained video interpolation model.

Extensive experiments show that Consistent4D performs competitively with prior art, opening up new possibilities for 4D dynamic object generation from monocular videos, while also demonstrating advantages for conventional text-to-3D generation tasks. Our project page is https://consistent4d.github.io/.
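As a rough illustration of the Interpolation-driven Consistency Loss described above, here is a minimal sketch, not the implementation in this repo: `interpolation_model` is a hypothetical stand-in for the frozen pretrained video interpolation network (RIFE in the paper), and the L1 penalty and three-frame setup are assumptions made for clarity.

# Minimal sketch of the Interpolation-driven Consistency Loss (ICL) idea.
# `interpolation_model` is a stand-in for a frozen, pretrained video
# interpolation network; the L1 penalty is an assumption, not the repo's exact choice.
import torch
import torch.nn.functional as F

def interpolation_consistency_loss(frame_t0, frame_mid, frame_t1, interpolation_model):
    # frame_t0 / frame_mid / frame_t1: frames rendered by DyNeRF at times
    # t0 < t_mid < t1 from the same camera, each of shape (B, 3, H, W).
    with torch.no_grad():
        # Pseudo ground truth: the middle frame predicted from the two endpoint frames.
        interpolated_mid = interpolation_model(frame_t0, frame_t1)
    # Penalize the discrepancy between the rendered and the interpolated middle frame,
    # encouraging spatial and temporal consistency of the rendered video.
    return F.l1_loss(frame_mid, interpolated_mid)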

Important notes

Recently I found that some works train on Objaverse animated models and adopt the test dataset of Consistent4D. However, Objaverse contains six of the seven animated objects used in our work, so we suggest filtering them out when training on that dataset for a fair test. The UIDs of the objects in the test dataset are provided in test_dataset_uid.txt.

News

[2024.07.17] 🎉 We release our new 4D generation work, Animate3D! It can animate any 3D object (mesh/Gaussian) and export animated mesh files ready for import into standard 3D software and game engines! <br> [2024.03.25] 🎉 Our new work STAG4D is available on arXiv! The results produced by STAG4D are considerably better than those of Consistent4D. Welcome to keep an eye on it! <br> [2024.01.23] 🎉 All code, including the evaluation scripts, is released! Thanks for your interest! (The refactored code seems to generate slightly better results than what we used before. We don't know why, but are happy to hear it.) <br> [2024.01.16] 🎉😜 Consistent4D is accepted by ICLR 2024! Thanks to all! Our paper will soon be updated according to the suggestions from the rebuttal phase. <br> [2023.12.10] The code of Consistent4D is released! The code is refactored and optimized to accelerate training (~2 hours on a V100 GPU now!). For the convenience of quantitative comparison, we provide the test dataset used in our paper (our results on the test dataset are also included). <br> [2023.11.07] The paper of Consistent4D is available on arXiv. We also provide the input videos used in our paper/project page here. For our results on the input videos, please visit our GitHub project page to download them (see the folder gallery).

Installation

The installation is the same as for the original threestudio, so skip it if you have already installed threestudio.

The code is tested on a V100 GPU with Python 3.9.16, CUDA 11.1, and torch 1.12.1+cu113.

# Recommended to use anaconda
conda create -n consistent4d python=3.9
conda activate consistent4d
# Clone the repo
git clone https://github.com/yanqinJiang/Consistent4D
cd Consistent4D

# Build the environment
# Install torch: the code is tested with torch 1.12.1+cu113
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
# Install other packages
pip install -r requirements.txt


# Prepare Zero123
cd load/zero123
wget https://zero123.cs.columbia.edu/assets/zero123-xl.ckpt

# Prepare video interpolation module
cp /path/to/flownet/checkpoint ./extern/RIFE/flownet.pkl

Data preparation

We provide the processed input data used in our paper. If you want to use your own data, please pre-process it into the structure described below.

The structure of the input data should be like

-image_seq_name
    - 0.png
    - 1.png
    - 2.png
    ...
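As a minimal sketch (not part of this repo) of how such a frame sequence could be produced from a raw video, the snippet below uses OpenCV; the paths are placeholders, and any background removal your pipeline expects would still need to be done separately.

# Hypothetical helper: split a video into 0.png, 1.png, ... using OpenCV.
import os
import cv2

def video_to_frames(video_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Frames are written in order as 0.png, 1.png, 2.png, ...
        cv2.imwrite(os.path.join(out_dir, f"{idx}.png"), frame)
        idx += 1
    cap.release()

video_to_frames("input.mp4", "./load/demo/my_image_seq")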

Training

Dynamic NeRF Training

# We provide three different configs (consistent4d_low_vram.yaml/consistent4d.yaml/consistent4d-4layers.yaml), requiring 24/32/40 GB of VRAM for training, respectively.
# The results in the paper and project page are produced by the model in the config consistent4d.yaml. consistent4d-4layers.yaml is newly added, aiming at better results.
# If you have access to a GPU with enough memory, we highly recommend setting data.resolution_milestones in the config to a larger number, e.g., 400, and you will get even better results.

python launch.py --config configs/consistent4d.yaml --train --gpu 0 data.image_seq_path=./load/demo/blooming_rose

Video enhancer

The video enhancer, as a post-processing step, can only slightly improve the quality (sometimes not at all), while it requires a tedious workflow to prepare the training data, so feel free to skip it. All results (qualitative/quantitative) in our paper and project page are without the video enhancer unless specially mentioned. We will mark the video enhancer as an optional stage in the updated paper. To use the video enhancer:

Evaluation

To evaluate, first convert the RGBA ground-truth images to images with a white background. Then, download the pre-trained model (i3d_pretrained_400.pt) for calculating FVD here (this link is borrowed from DisCo, and the FVD computation file is a refactoring of their evaluation code; thanks for their work!). Then, organize the result folder as follows:

ā”œā”€ā”€ gt
ā”‚   ā”œā”€ā”€ object_0
ā”‚   ā”‚   ā”œā”€ā”€ eval_0
ā”‚   ā”‚   ā”‚   ā”œā”€ā”€ 0.png
ā”‚   ā”‚   ā”‚   ā””ā”€ā”€ ...
ā”‚   ā”‚   ā”œā”€ā”€ eval_1
ā”‚   ā”‚   ā”‚   ā””ā”€ā”€ ...
ā”‚   ā”‚   ā””ā”€ā”€ ...
ā”‚   ā”œā”€ā”€ object_1
ā”‚   ā”‚   ā””ā”€ā”€ ...
ā”‚   ā””ā”€ā”€ ...
ā”œā”€ā”€ pred
ā”‚   ā”œā”€ā”€ object_0
ā”‚   ā”‚   ā”œā”€ā”€ eval_0
ā”‚   ā”‚   ā”‚   ā”œā”€ā”€ 0.png
ā”‚   ā”‚   ā”‚   ā””ā”€ā”€ ...
ā”‚   ā”‚   ā”œā”€ā”€ eval_1
ā”‚   ā”‚   ā”‚   ā””ā”€ā”€ ...
ā”‚   ā”‚   ā””ā”€ā”€ ...
ā”‚   ā”œā”€ā”€ object_1
ā”‚   ā”‚   ā””ā”€ā”€ ...
ā”‚   ā””ā”€ā”€ ...

Next, run

cd evaluation
# image-level metrics
python compute_image_level_metrics.py --gt_root /path/to/gt --pred_root /path/to/pred
# video-level metrics
python compute_fvd.py --gt_root /path/to/gt --pred_root /path/to/pred --model_path /path/to/i3d_pretrained_400.pt
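
For the first step above (converting the RGBA ground truth to a white background), a small script like the following could be used; this is only a sketch assuming Pillow, and the directory paths are placeholders.

# Hypothetical helper: composite RGBA ground-truth images onto a white background.
from pathlib import Path
from PIL import Image

def rgba_to_white(src_dir, dst_dir):
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for png in sorted(Path(src_dir).glob("*.png")):
        rgba = Image.open(png).convert("RGBA")
        white = Image.new("RGBA", rgba.size, (255, 255, 255, 255))
        # Alpha-composite onto white, then drop the alpha channel before saving.
        Image.alpha_composite(white, rgba).convert("RGB").save(dst / png.name)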

TODO

We are interested in continuously improving our work and adding new features, e.g., advanced 4D representations and supervision signals. If you encounter any problems during use or have any suggestions for improvement, please feel free to open an issue. Thanks for your feedback! :)

Tips

Acknowledgement

Our code is based on threestudio. We thank the authors for their effort in building such a great codebase. <br> The video interpolation model employed in our work is RIFE, which is continuously improved by its authors for real-world applications. Thanks for their great work!

Citation

@inproceedings{
jiang2024consistentd,
title={Consistent4D: Consistent 360{\textdegree} Dynamic Object Generation from Monocular Video},
author={Yanqin Jiang and Li Zhang and Jin Gao and Weiming Hu and Yao Yao},
booktitle={The Twelfth International Conference on Learning Representations},
year={2024},
url={https://openreview.net/forum?id=sPUrdFGepF}
}