<div align="center">

<h1><b>GaussianAvatar</b>: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians</h1>

Liangxiao Hu<sup>1,†</sup>, Hongwen Zhang<sup>2</sup>, Yuxiang Zhang<sup>3</sup>, Boyao Zhou<sup>3</sup>, Boning Liu<sup>3</sup>, Shengping Zhang<sup>1,*</sup>, Liqiang Nie<sup>1</sup>

<sup>1</sup>Harbin Institute of Technology <sup>2</sup>Beijing Normal University <sup>3</sup>Tsinghua University

<sup>*</sup>Corresponding author <sup>†</sup>Work done during an internship at Tsinghua University

Project Page · Paper · Video

</div>

## :mega: Updates

[4/3/2024] The pretrained models for the other three people from People Snapshot are released on OneDrive.

[7/2/2024] The scripts for your own video are released.

[23/1/2024] Training and inference codes for People Snapshot are released.

## Introduction

We present GaussianAvatar, an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video.

## Installation

To deploy and run GaussianAvatar, run the following commands:

```bash
conda env create --file environment.yml
conda activate gs-avatar
```

Then, compile `diff-gaussian-rasterization` and `simple-knn` as in the 3DGS repository.
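A quick way to confirm that both extensions built into the active environment is to check whether they are importable. This is only a convenience sketch; the module names `diff_gaussian_rasterization` and `simple_knn` are assumed from the 3DGS submodule layout.

```python
import importlib.util

def find_missing(modules):
    """Return the subset of module names that cannot be found by the importer."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Module names assumed from the 3DGS submodules; adjust if your build differs.
missing = find_missing(["diff_gaussian_rasterization", "simple_knn"])
print("missing extensions:", missing)
```

An empty list means both CUDA extensions compiled and are visible from the `gs-avatar` environment.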

## Download models and data

```
smpl_files
 ├── smpl
 │   ├── SMPL_FEMALE.pkl
 │   ├── SMPL_MALE.pkl
 │   └── SMPL_NEUTRAL.pkl
 └── smplx
     ├── SMPLX_FEMALE.npz
     ├── SMPLX_MALE.npz
     └── SMPLX_NEUTRAL.npz
```
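Since a missing model file only surfaces mid-run, a small layout check against the tree above can save a failed training launch. A minimal sketch; the root directory name `smpl_files` is taken from the tree, so adjust the path to wherever you placed it:

```python
from pathlib import Path

# Relative paths expected under the smpl_files root, per the tree above.
EXPECTED = [
    "smpl/SMPL_FEMALE.pkl", "smpl/SMPL_MALE.pkl", "smpl/SMPL_NEUTRAL.pkl",
    "smplx/SMPLX_FEMALE.npz", "smplx/SMPLX_MALE.npz", "smplx/SMPLX_NEUTRAL.npz",
]

def missing_model_files(root):
    """Return the expected SMPL/SMPL-X files that are absent under root."""
    return [rel for rel in EXPECTED if not (Path(root) / rel).is_file()]
```

An empty return value means the layout matches the tree above.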

## Run on People Snapshot dataset

We take the subject `m4c_processed` as an example.

### Training

```bash
python train.py -s $gs_data_path/m4c_processed -m output/m4c_processed --train_stage 1
```

### Evaluation

```bash
python eval.py -s $gs_data_path/m4c_processed -m output/m4c_processed --epoch 200
```

### Rendering novel pose

```bash
python render_novel_pose.py -s $gs_data_path/m4c_processed -m output/m4c_processed --epoch 200
```
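The three commands above differ only in the entry script, so when processing several subjects it can be convenient to assemble them programmatically. A sketch with the flags copied from the examples above; the data root is whatever `$gs_data_path` points to:

```python
def build_commands(subject, data_root, epoch=200):
    """Assemble the train / eval / render commands shown above for one subject."""
    src = f"{data_root}/{subject}"
    out = f"output/{subject}"
    return [
        ["python", "train.py", "-s", src, "-m", out, "--train_stage", "1"],
        ["python", "eval.py", "-s", src, "-m", out, "--epoch", str(epoch)],
        ["python", "render_novel_pose.py", "-s", src, "-m", out, "--epoch", str(epoch)],
    ]

for cmd in build_commands("m4c_processed", "$gs_data_path"):
    print(" ".join(cmd))
```

Each list can be passed directly to `subprocess.run` once the environment is activated.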

## Run on Your Own Video

### Preprocessing

Organize your data as follows:

```
smpl_files
 ├── images
 ├── masks
 ├── cameras.npz
 └── poses_optimized.npz
```

Then run the preprocessing scripts:

```bash
cd scripts && python sample_romp2gsavatar.py
python gen_pose_map_cano_smpl.py
```
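Before starting stage 1, it can help to confirm that `cameras.npz` and `poses_optimized.npz` actually contain arrays. An `.npz` file is just a zip of `.npy` entries, so the stored array names can be listed with the standard library alone (the array keys themselves depend on the preprocessing scripts and are not assumed here):

```python
import zipfile

def npz_array_names(path):
    """List the arrays stored in a .npz archive (a zip archive of .npy files)."""
    with zipfile.ZipFile(path) as zf:
        return sorted(name[:-4] for name in zf.namelist() if name.endswith(".npy"))

# e.g. npz_array_names("cameras.npz") to sanity-check the preprocessing output.
```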

### Training for Stage 1

```bash
cd .. && python train.py -s $path_to_data/$subject -m output/${subject}_stage1 --train_stage 1 --pose_op_start_iter 10
```

### Training for Stage 2

```bash
cd scripts && python export_stage_1_smpl.py
python render_pred_smpl.py
python gen_pose_map_our_smpl.py
cd .. && python train.py -s $path_to_data/$subject -m output/${subject}_stage2 --train_stage 2 --stage1_out_path $path_to_stage1_net_save_path
```
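The stage-2 preparation is a fixed three-script sequence, so a thin wrapper that stops at the first failing script is a natural safeguard. A sketch only, with script names copied from the commands above and assumed to be run from the repository root:

```python
import subprocess

# Stage-2 preparation scripts, in the order given above.
STAGE2_PREP = [
    ["python", "export_stage_1_smpl.py"],
    ["python", "render_pred_smpl.py"],
    ["python", "gen_pose_map_our_smpl.py"],
]

def run_in_order(commands, cwd="scripts"):
    """Run each command in order, raising CalledProcessError on the first failure."""
    for cmd in commands:
        subprocess.run(cmd, cwd=cwd, check=True)

# run_in_order(STAGE2_PREP)  # uncomment inside the repository
```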

## Todo

## Citation

If you find this code useful for your research, please consider citing:

```bibtex
@inproceedings{hu2024gaussianavatar,
  title={GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians},
  author={Hu, Liangxiao and Zhang, Hongwen and Zhang, Yuxiang and Zhou, Boyao and Liu, Boning and Zhang, Shengping and Nie, Liqiang},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}
```

## Acknowledgements

This project is built on source code shared by Gaussian-Splatting, POP, HumanNeRF, and InstantAvatar.