# AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture (ECCV 2022)
Zhe Li, Zerong Zheng, Hongwen Zhang, Chaonan Ji, Yebin Liu
## Brief Introduction
To address the ill-posed problem caused by partial observations in monocular human volumetric capture, we present AvatarCap, a framework that introduces animatable avatars into the capture pipeline for high-fidelity reconstruction in both visible and invisible regions.
Using this repo, you can either create an animatable avatar from several 3D scans of one character, or reconstruct that character from a monocular video using the avatar as a prior.
## Requirements
- Python 3
- Python packages listed in `requirements.txt` (a setup sketch follows this list)
- CUDA (tested on 11.1)
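A minimal environment-setup sketch; the conda environment name and Python version are illustrative assumptions, while `requirements.txt` and CUDA 11.1 come from the list above:

```
# create an isolated environment (name and Python version are illustrative)
conda create -n avatarcap python=3.8 -y
conda activate avatarcap
# install the repo's Python dependencies (choose a PyTorch build matching CUDA 11.1)
pip install -r requirements.txt
```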
## SMPL & Pretrained Models
- Download the SMPL model and place the `.pkl` files in `./smpl_files`.
- Download the pretrained models and unzip them to `./pretrained_models`. The contents of this folder are listed below:
```
./pretrained_models
├── avatar_net
│   ├── example               # the avatar network of the character in the example dataset
│   └── example_finetune_tex  # the avatar network with higher-quality texture
├── recon_net                 # reconstruction network, general to arbitrary subjects
└── normal_net                # normal estimation network used in data preprocessing
```
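After unzipping, a quick sanity check that the layout matches the tree above (not a script from this repo):

```
# all four model directories should exist after unzipping
ls -d ./pretrained_models/avatar_net/example \
      ./pretrained_models/avatar_net/example_finetune_tex \
      ./pretrained_models/recon_net \
      ./pretrained_models/normal_net
```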
## Run on Example Dataset
### Example Dataset
- Download the example dataset (Google Drive or Tsinghua Cloud), which contains training data generated from 22 3D scans of one character and testing data generated from a monocular RGB video. The example dataset has been preprocessed and can be used directly for training and testing.
- Unzip it somewhere; this directory is denoted as `EXAMPLE_DATA_DIR` (see the sketch after this list).
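For instance (the archive filename is illustrative; use the name the download provides):

```
# unzip the dataset; its destination is referred to as EXAMPLE_DATA_DIR below
unzip avatarcap_example_dataset.zip -d /path/to/EXAMPLE_DATA_DIR
ls /path/to/EXAMPLE_DATA_DIR   # should contain training/ and testing/
```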
### Train GeoTexAvatar
- Specify `training_data_dir` in `configs/example.yaml` as `EXAMPLE_DATA_DIR/training` (this edit can also be scripted; see the sketch after this list).
- Run the following script:
```
python main.py -c ./configs/example.yaml -m train
```
- Network checkpoints will be saved in `./results/example/training`.
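A sketch of the scripted config edit, assuming `training_data_dir` (and `testing_data_dir` for the next section) appears as a top-level key in `configs/example.yaml`:

```
# point the config at the unzipped training data, then launch training
sed -i 's|^training_data_dir:.*|training_data_dir: /path/to/EXAMPLE_DATA_DIR/training|' ./configs/example.yaml
python main.py -c ./configs/example.yaml -m train
```

The same pattern with `testing_data_dir` prepares the testing run described next.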
### Test AvatarCap or GeoTexAvatar
- Specify `testing_data_dir` in `configs/example.yaml` as `EXAMPLE_DATA_DIR/testing`.
- Run the following script:
```
python main.py -c ./configs/example.yaml -m test
```
- Output results will be saved in `./results/example/testing`.
## Run on Customized Data
Check `DATA.md` for instructions on processing your own data.
## Acknowledgement
Some code is based on PIFuHD, pix2pixHD, SCANimate, POP, and Animatable NeRF. We thank the authors for their great work!
## License
MIT License. SMPL-related files are subject to the license of SMPL.
## Citation
If you find our code, data, or paper useful for your research, please consider citing:
```
@InProceedings{li2022avatarcap,
    title     = {AvatarCap: Animatable Avatar Conditioned Monocular Human Volumetric Capture},
    author    = {Li, Zhe and Zheng, Zerong and Zhang, Hongwen and Ji, Chaonan and Liu, Yebin},
    booktitle = {European Conference on Computer Vision (ECCV)},
    month     = {October},
    year      = {2022},
}
```