Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition

Paper | Video (YouTube) | Project Page | SynWild Data

Official Repository for CVPR 2023 paper Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition.

<img src="assets/teaser.png" width="800" height="223"/>

Getting Started

Download the SMPL body model files and move them into the location expected by the code:

mkdir code/lib/smpl/smpl_model/
mv /path/to/smpl/models/basicModel_f_lbs_10_207_0_v1.0.0.pkl code/lib/smpl/smpl_model/SMPL_FEMALE.pkl
mv /path/to/smpl/models/basicmodel_m_lbs_10_207_0_v1.0.0.pkl code/lib/smpl/smpl_model/SMPL_MALE.pkl

Download preprocessed demo data

You can quickly try out Vid2Avatar with a preprocessed demo sequence and its pre-trained checkpoint, available on Google Drive; the sequence is originally a video clip provided by NeuMan. Put the preprocessed demo data under the folder data/ and the folder checkpoints under outputs/parkinglot/.
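As a sanity check before training, the layout described above can be verified with a short script. The paths below are taken from the setup instructions; the demo folder name follows the parkinglot example and may differ for your own sequences.

```python
from pathlib import Path

# Expected locations per the setup steps above (demo folder name is the
# parkinglot example; adjust for your own sequence).
EXPECTED = [
    "code/lib/smpl/smpl_model/SMPL_FEMALE.pkl",
    "code/lib/smpl/smpl_model/SMPL_MALE.pkl",
    "data",                            # preprocessed demo sequence goes here
    "outputs/parkinglot/checkpoints",  # pre-trained checkpoint folder
]

def missing_paths(root="."):
    """Return every expected path that does not exist under root."""
    return [p for p in EXPECTED if not (Path(root) / p).exists()]

if __name__ == "__main__":
    for p in missing_paths():
        print(f"missing: {p}")
```

Running the script from the repository root prints one line per missing file or folder, so an empty output means the layout matches the instructions.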

Training

Before training, make sure that the metainfo in the data config file code/confs/dataset/video.yaml matches the expected training video. You can also continue a previous training run by setting the flag is_continue in the model config file code/confs/model/model_w_bg. Then run:

cd code
python train.py
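To resume an interrupted run, the is_continue flag mentioned above would be set in the model config file. A minimal sketch (only this key comes from the text; the rest of the file is omitted):

```yaml
# code/confs/model/model_w_bg.yaml (illustrative snippet; other keys omitted)
is_continue: true   # resume the previous run instead of training from scratch
```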

The training usually takes 24-48 hours. The validation results can be found at outputs/.

Test

Run the following command to obtain the final outputs. By default, it loads the latest checkpoint.

cd code
python test.py
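The "latest checkpoint" behavior can be sketched as follows. This is an illustration, not the repo's actual loader: the .ckpt extension and the modification-time ordering are assumptions.

```python
from pathlib import Path

def latest_checkpoint(ckpt_dir):
    """Pick the most recently modified .ckpt file in ckpt_dir, or None.

    The .ckpt naming and mtime-based ordering are assumptions for
    illustration; the repo's own loader may use a different convention.
    """
    ckpts = sorted(Path(ckpt_dir).glob("*.ckpt"),
                   key=lambda p: p.stat().st_mtime)
    return ckpts[-1] if ckpts else None
```

Under this scheme, whichever checkpoint was written last (e.g. by the most recent training epoch) is the one loaded at test time.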

3D Visualization

We use AITViewer to visualize the human models in 3D. First install AITViewer: pip install aitviewer imgui==1.4.1, and then run the following command to visualize the canonical mesh (--mode static) or deformed mesh sequence (--mode dynamic):

cd visualization 
python vis.py --mode {MODE} --path {PATH}
<p align="center"> <img src="assets/parkinglot_360.gif" width="623" height="346"/> </p>

Play on custom video

<p align="center"> <img src="assets/roger.gif" width="240" height="270"/> <img src="assets/exstrimalik.gif" width="240" height="270"/> <img src="assets/martial.gif" width="240" height="270"/> </p>

Acknowledgement

We have used code from other great research work, including VolSDF, NeRF++, SMPL-X, Anim-NeRF, IMavatar and SNARF. We sincerely thank the authors for their awesome work! We also thank the authors of ICON and SelfRecon for helpful discussions about the experiments.

Related Works

Here are more recent related human body reconstruction projects from our team:

Citation

@inproceedings{guo2023vid2avatar,
  title     = {Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition},
  author    = {Guo, Chen and Jiang, Tianjian and Chen, Xu and Song, Jie and Hilliges, Otmar},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2023},
}