<div align="center">

<b>MonoGaussianAvatar</b>: Monocular Gaussian Point-based Head Avatar

Yufan Chen<sup>†,1</sup>, Lizhen Wang<sup>2</sup>, Qijing Li<sup>2</sup>, Hongjiang Xiao<sup>3</sup>, Shengping Zhang<sup>*,1</sup>, Hongxun Yao<sup>1</sup>, Yebin Liu<sup>2</sup>

<p><sup>1</sup>Harbin Institute of Technology &nbsp;&nbsp;<sup>2</sup>Tsinghua University &nbsp;&nbsp;<sup>3</sup>Communication University of China <br><sup>*</sup>Corresponding author &nbsp;&nbsp;<sup>&dagger;</sup>Work done during an internship at Tsinghua University<p>

Paper | YouTube Video | Project Page

<img src="assets/teaser.png" width="800" height="350"/> </div>

Getting Started

```shell
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d -c pytorch3d
cd submodules/
git clone https://github.com/graphdeco-inria/gaussian-splatting --recursive
cd gaussian-splatting/
pip install -e submodules/diff-gaussian-rasterization
cd ../..
```

Preparing dataset

Our data format is the same as IMavatar. You can download a preprocessed dataset from Google Drive (subjects 1 and 2).

If you'd like to generate your own dataset, please follow the instructions in the IMavatar repo.
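Before training, it can help to sanity-check each preprocessed sequence. The item names below (`image/`, `mask/`, `flame_params.json`) are our reading of the IMavatar preprocessing layout; verify them against the IMavatar repo before relying on this sketch.

```python
from pathlib import Path

# Hedged sketch: the expected names assume the IMavatar preprocessing
# layout (per-sequence image/ and mask/ folders plus a flame_params.json
# with per-frame FLAME parameters); double-check against the IMavatar repo.
def missing_items(seq_dir):
    """Return the expected dataset items that are absent from seq_dir."""
    seq = Path(seq_dir)
    expected = ["image", "mask", "flame_params.json"]
    return [name for name in expected if not (seq / name).exists()]
```

Run it on each sequence folder; an empty list means the layout looks complete.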

Link the dataset folder to ./data/datasets and the experiment output folder to ./data/experiments.
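The linking step can be scripted; the source paths here are placeholders for wherever your storage actually lives.

```python
import os
from pathlib import Path

# Placeholder paths: replace dataset_src / experiment_src with your
# actual dataset and experiment storage locations.
def link_folders(dataset_src, experiment_src, repo_root="."):
    """Create ./data/datasets and ./data/experiments as symlinks."""
    data = Path(repo_root) / "data"
    data.mkdir(parents=True, exist_ok=True)
    for src, name in ((dataset_src, "datasets"), (experiment_src, "experiments")):
        dst = data / name
        if not dst.is_symlink() and not dst.exists():
            os.symlink(os.path.abspath(src), dst)
```

Equivalently, two `ln -s` commands from the repo root do the same thing.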

Pre-trained model

Download a pretrained model from . Uncompress it and put it into the experiment folder ./data/experiments.

Training

```shell
python scripts/exp_runner.py --conf ./confs/subject1.conf [--is_continue]
```

Evaluation

Set the --is_eval flag for evaluation. Optionally set --checkpoint (otherwise the latest checkpoint is used) and --load_path.

```shell
python scripts/exp_runner.py --conf ./confs/subject1.conf --is_eval [--checkpoint 60] [--load_path ...]
```
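For reference, the flags in the training and evaluation commands combine as in the argparse sketch below. This only mirrors the commands shown here; the authoritative parser lives in scripts/exp_runner.py and may define different defaults and extra options.

```python
import argparse

# Sketch mirroring the command lines above; the real parser is in
# scripts/exp_runner.py and may differ.
def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("--conf", required=True)               # config, e.g. ./confs/subject1.conf
    parser.add_argument("--is_continue", action="store_true")  # resume training from a checkpoint
    parser.add_argument("--is_eval", action="store_true")      # run evaluation instead of training
    parser.add_argument("--checkpoint", default="latest")      # e.g. 60; latest if omitted
    parser.add_argument("--load_path", default=None)           # explicit experiment path to load
    return parser

args = build_parser().parse_args(
    ["--conf", "./confs/subject1.conf", "--is_eval", "--checkpoint", "60"]
)
print(args.is_eval, args.checkpoint)  # True 60
```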

GPU requirement

We train our models on a single NVIDIA RTX 3090 GPU (24 GB).

Citation

If you find our code or paper useful, please cite as:

```bibtex
@inproceedings{chen2024monogaussianavatar,
  title={Monogaussianavatar: Monocular gaussian point-based head avatar},
  author={Chen, Yufan and Wang, Lizhen and Li, Qijing and Xiao, Hongjiang and Zhang, Shengping and Yao, Hongxun and Liu, Yebin},
  booktitle={ACM SIGGRAPH 2024 Conference Papers},
  pages={1--9},
  year={2024}
}
```