AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture (ECCV 2022)

Zhe Li, Zerong Zheng, Hongwen Zhang, Chaonan Ji, Yebin Liu

Paper | Project

Brief Introduction

To address the ill-posed problem caused by partial observations in monocular human volumetric capture, we present AvatarCap, a framework that introduces animatable avatars into the capture pipeline for high-fidelity reconstruction in both visible and invisible regions.

Using this repo, you can either create an animatable avatar from several 3D scans of one character, or reconstruct that character from a monocular video using the avatar as a prior.

Requirements

SMPL & Pretrained Models

./pretrained_models
├── avatar_net
│   ├── example               # the avatar network of the character in the example dataset
│   └── example_finetune_tex  # the avatar network with a higher-quality texture
├── recon_net                 # reconstruction network, general to arbitrary subjects
└── normal_net                # normal estimation network used in data preprocessing
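As a quick sanity check before training or testing, a short script like the following (a sketch, not part of the repo; the paths are taken from the tree above) can verify that the pretrained models are unpacked in the expected locations:

import os

# Pretrained model directories expected by the example setup (paths from the tree above).
EXPECTED_DIRS = [
    "./pretrained_models/avatar_net/example",
    "./pretrained_models/avatar_net/example_finetune_tex",
    "./pretrained_models/recon_net",
    "./pretrained_models/normal_net",
]

missing = [d for d in EXPECTED_DIRS if not os.path.isdir(d)]
if missing:
    print("Missing pretrained model directories:")
    for d in missing:
        print("  " + d)
else:
    print("All pretrained model directories found.")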

Run on Example Dataset

Example Dataset

Train GeoTexAvatar

python main.py -c ./configs/example.yaml -m train

Test AvatarCap or GeoTexAvatar

python main.py -c ./configs/example.yaml -m test
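Both commands share the same entry point and differ only in the -m flag. The following is a minimal sketch of the config/mode dispatch pattern such a CLI typically uses; it is an illustration under assumptions (the run() placeholder and the PyYAML dependency are not taken from the repo's actual main.py):

import argparse
import yaml  # PyYAML, assumed here for loading the YAML config


def run(config, mode):
    # Placeholder: the real training/testing logic lives in the repo's own modules.
    print(f"mode={mode}, config keys: {list(config.keys())}")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-c", "--config", required=True,
                        help="path to a YAML config, e.g. ./configs/example.yaml")
    parser.add_argument("-m", "--mode", required=True, choices=["train", "test"])
    args = parser.parse_args()

    # Load the experiment configuration and dispatch on the requested mode.
    with open(args.config, "r") as f:
        config = yaml.safe_load(f)
    run(config, args.mode)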

Run on Customized Data

See DATA.md for instructions on processing your own data.

Acknowledgement

Some code in this repository is based on PIFuHD, pix2pixHD, SCANimate, POP and Animatable NeRF. We thank the authors for their great work!

License

MIT License. SMPL-related files are subject to the license of SMPL.

Citation

If you find our code, data, or paper useful for your research, please consider citing:

@InProceedings{li2022avatarcap,
    title={AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture},
    author={Li, Zhe and Zheng, Zerong and Zhang, Hongwen and Ji, Chaonan and Liu, Yebin},
    booktitle={European Conference on Computer Vision (ECCV)},
    month={October},
    year={2022},
}