<div align="center"><b>GaussianAvatar</b>: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians
Liangxiao Hu<sup>1,†</sup>, Hongwen Zhang<sup>2</sup>, Yuxiang Zhang<sup>3</sup>, Boyao Zhou<sup>3</sup>, Boning Liu<sup>3</sup>, Shengping Zhang<sup>1,*</sup>, Liqiang Nie<sup>1</sup>
<sup>1</sup>Harbin Institute of Technology <sup>2</sup>Beijing Normal University <sup>3</sup>Tsinghua University
<sup>*</sup>Corresponding author <sup>†</sup>Work done during an internship at Tsinghua University
Project Page · Paper · Video
</div>

## :mega: Updates
- [4/3/2024] The pretrained models for the other three subjects from People Snapshot are released on OneDrive.
- [7/2/2024] The scripts for running on your own video are released.
- [23/1/2024] Training and inference code for People Snapshot is released.
## Introduction
We present GaussianAvatar, an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video.
## Installation

To set up the environment for GaussianAvatar, run the following commands:

```bash
conda env create --file environment.yml
conda activate gs-avatar
```

Then compile `diff-gaussian-rasterization` and `simple-knn` as in the 3DGS repository.
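
A quick way to verify that both CUDA extensions built correctly is to import them. This is only an optional sanity check; the module and symbol names below follow the 3DGS codebase and are assumptions here:

```python
# sanity check for the compiled CUDA extensions (names as in the 3DGS codebase)
import torch
from diff_gaussian_rasterization import GaussianRasterizationSettings, GaussianRasterizer
from simple_knn._C import distCUDA2

print("CUDA available:", torch.cuda.is_available())
```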
## Download models and data

- SMPL/SMPL-X models: register and download SMPL and SMPL-X, and put the model files in `assets/smpl_files` (a quick loading check is sketched after this list). The folder should have the following structure:

```
smpl_files
├── smpl
│   ├── SMPL_FEMALE.pkl
│   ├── SMPL_MALE.pkl
│   └── SMPL_NEUTRAL.pkl
└── smplx
    ├── SMPLX_FEMALE.npz
    ├── SMPLX_MALE.npz
    └── SMPLX_NEUTRAL.npz
```
- Data: download the provided data from OneDrive, which includes `assets.zip`, `gs_data.zip`, and `pretrained_models.zip`. Unzip `assets.zip` into the corresponding folder in the repository, and unzip the other two archives to `gs_data_path` and `pretrained_models_path`, respectively.
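
As a quick sanity check that the body models are in place, they can be loaded with the `smplx` pip package. This is an assumption for illustration; the training code may load the models differently, and reading the SMPL `.pkl` files additionally requires `chumpy`:

```python
import smplx

# smplx resolves these paths to assets/smpl_files/smpl/SMPL_NEUTRAL.pkl and
# assets/smpl_files/smplx/SMPLX_NEUTRAL.npz, matching the layout above
smpl = smplx.create("assets/smpl_files", model_type="smpl", gender="neutral")
smplx_model = smplx.create("assets/smpl_files", model_type="smplx", gender="neutral")
print(smpl.v_template.shape)         # (6890, 3): SMPL template vertices
print(smplx_model.v_template.shape)  # (10475, 3): SMPL-X template vertices
```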
## Run on People Snapshot dataset

We take the subject `m4c_processed` as an example.

### Training

```bash
python train.py -s $gs_data_path/m4c_processed -m output/m4c_processed --train_stage 1
```
### Evaluation

```bash
python eval.py -s $gs_data_path/m4c_processed -m output/m4c_processed --epoch 200
```
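
For intuition, reconstruction quality on People Snapshot is typically reported with image metrics such as PSNR against the ground-truth frames. The sketch below is only illustrative; `eval.py`'s exact metrics and output layout may differ:

```python
import numpy as np

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between two images with values in [0, max_val]."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val**2 / mse)
```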
### Rendering novel pose

```bash
python render_novel_pose.py -s $gs_data_path/m4c_processed -m output/m4c_processed --epoch 200
```
## Run on Your Own Video

### Preprocessing

- Masks and poses: use the bash script `scripts/custom/process-sequence.sh` from InstantAvatar. The data folder should contain the following:

```
$path_to_data/$subject
├── images
├── masks
├── cameras.npz
└── poses_optimized.npz
```
- Data format: we provide a script to convert ROMP's pose format to ours (remember to change the `path` at lines 50 and 51 of the script):

```bash
cd scripts && python sample_romp2gsavatar.py
```
- Position map of the canonical pose (remember to change the corresponding `path`; a sketch of what this map stores follows this list):

```bash
python gen_pose_map_cano_smpl.py
```
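
For intuition, a canonical-pose position map stores a 3D point on the body surface at every texel of a fixed SMPL UV image, which serves as input to the appearance network. The sketch below is not the repository's `gen_pose_map_cano_smpl.py`: `verts` (canonical SMPL vertices) and `uv` (per-vertex UV coordinates) are assumed inputs, and a real implementation would rasterize triangles with barycentric interpolation rather than splatting vertices:

```python
import numpy as np

def bake_position_map(verts: np.ndarray, uv: np.ndarray, size: int = 512) -> np.ndarray:
    """Splat per-vertex canonical 3D positions into a (size, size, 3) UV map.

    verts: (N, 3) canonical vertex positions; uv: (N, 2) coordinates in [0, 1].
    """
    pos_map = np.zeros((size, size, 3), dtype=np.float32)
    px = np.clip((uv[:, 0] * (size - 1)).astype(int), 0, size - 1)
    py = np.clip(((1.0 - uv[:, 1]) * (size - 1)).astype(int), 0, size - 1)  # image rows run top-down
    pos_map[py, px] = verts
    return pos_map
```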
### Training for Stage 1

```bash
cd .. && python train.py -s $path_to_data/$subject -m output/${subject}_stage1 --train_stage 1 --pose_op_start_iter 10
```
### Training for Stage 2

- Export the predicted SMPL:

```bash
cd scripts && python export_stage_1_smpl.py
```

- Visualize the optimized SMPL (optional):

```bash
python render_pred_smpl.py
```

- Generate the predicted position map:

```bash
python gen_pose_map_our_smpl.py
```

- Start training:

```bash
cd .. && python train.py -s $path_to_data/$subject -m output/${subject}_stage2 --train_stage 2 --stage1_out_path $path_to_stage1_net_save_path
```
## Todo

- Release the reorganized code and data.
- Provide the scripts for your own video.
- Provide the code for real-time animation.
## Citation

If you find this code useful for your research, please consider citing:

```bibtex
@inproceedings{hu2024gaussianavatar,
  title={GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians},
  author={Hu, Liangxiao and Zhang, Hongwen and Zhang, Yuxiang and Zhou, Boyao and Liu, Boning and Zhang, Shengping and Nie, Liqiang},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024}
}
```
## Acknowledgements

This project builds on source code shared by Gaussian-Splatting, POP, HumanNeRF, and InstantAvatar.