<!-- PROJECT LOGO -->
<p align="center">
  <h1 align="center">SCARF: Capturing and Animation of Body and Clothing from Monocular Video</h1>
  <!--
  <p align="center">
    <a href="https://ps.is.tuebingen.mpg.de/person/yxiu"><strong>Yao Feng</strong></a>
    ·
    <a href="https://ps.is.tuebingen.mpg.de/person/jyang"><strong>Jinlong Yang</strong></a>
    ·
    <a href="https://ps.is.tuebingen.mpg.de/person/black"><strong>Michael J. Black</strong></a>
    ·
    <a href="https://people.inf.ethz.ch/pomarc/"><strong>Marc Pollefeys</strong></a>
    ·
    <a href="https://ps.is.mpg.de/person/tbolkart"><strong>Timo Bolkart</strong></a>
  </p>
  <h2 align="center">SIGGRAPH Asia 2022 conference</h2>
  -->
  <div align="center">
    <img src="Doc/images/teaser.gif" alt="teaser" width="100%">
  </div>
</p>

This is the PyTorch implementation of SCARF. For more details, please see our project page.
SCARF extracts a 3D clothed avatar from a monocular video.
SCARF allows us to synthesize new views of the reconstructed avatar, and to animate the avatar with SMPL-X identity shape and pose control.
The disentanglement of the body and clothing further enables us to transfer clothing between subjects for virtual try-on applications.
Key features:
- animate the avatar by changing body poses (including hand articulation and facial expressions),
- synthesize novel views of the avatar, and
- transfer clothing between avatars for virtual try-on applications.
## Getting Started
Clone the repo:
```bash
git clone https://github.com/yfeng95/SCARF
cd SCARF
```
### Requirements
```bash
conda create -n scarf python=3.9
conda activate scarf
pip install -r requirements.txt
```
If you run into problems installing pytorch3d, please follow its official installation instructions.
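As an optional sanity check (not part of the repo), you can verify that PyTorch and pytorch3d import correctly and that CUDA is visible before running the demos:

```python
# Optional sanity check: confirm the core dependencies import and CUDA is visible.
import torch
import pytorch3d

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("pytorch3d:", pytorch3d.__version__)
```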
### Download data
```bash
bash fetch_data.sh
```
## Visualization
- check training frames:
  ```bash
  python main_demo.py --vis_type capture --frame_id 0
  ```
- novel view synthesis for a given frame ID:
  ```bash
  python main_demo.py --vis_type novel_view --frame_id 0
  ```
- extract mesh and visualize:
  ```bash
  python main_demo.py --vis_type extract_mesh --frame_id 0
  ```
You can go to our project page and play with the extracted meshes.
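If you prefer to inspect the extracted mesh locally, a minimal sketch along these lines should work, assuming trimesh is installed (`pip install trimesh`); the file path below is only an illustration, use whatever path main_demo.py reports when it saves the mesh:

```python
# Minimal sketch for viewing an extracted mesh locally (requires trimesh).
# The path below is hypothetical; use the file that main_demo.py actually writes.
import trimesh

mesh = trimesh.load("exps/snapshot/scarf_mesh_frame0000.obj", force="mesh")  # hypothetical output path
print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")
mesh.show()  # opens an interactive viewer window
```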
<p align="center"> <img src="Doc/images/mesh.gif"> </p>- animation
  ```bash
  python main_demo.py --vis_type animate
  ```
- clothing transfer
  ```bash
  # apply clothing from another model
  python main_demo.py --vis_type novel_view --clothing_model_path exps/snapshot/male-3-casual
  # transfer the clothing to a new body
  python main_demo.py --vis_type novel_view --body_model_path exps/snapshot/male-3-casual
  ```
<p align="center">
<img src="Doc/images/clothing_transfer.png">
</p>
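To try the clothing from several source avatars in one go, you can wrap the documented CLI in a small script. This is only a convenience sketch (not part of the repo), and the experiment paths are examples:

```python
# Convenience sketch: render novel views with clothing taken from several source avatars
# by calling the documented CLI. The paths below are examples; use your own trained models.
import subprocess

clothing_models = [
    "exps/snapshot/male-3-casual",
    # "exps/snapshot/<another-subject>",
]

for path in clothing_models:
    subprocess.run(
        ["python", "main_demo.py",
         "--vis_type", "novel_view",
         "--clothing_model_path", path],
        check=True,
    )
```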
More data and trained models can be found here; you can download them and put them into `./exps`.
## Training
- training with the SCARF example video
  ```bash
  bash train.sh
  ```
- training with other videos
  Check here for how to prepare the data from your own videos, then change the data_cfg accordingly; see the sketch after this list.
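A minimal sketch for inspecting a data config before editing it is shown below. The config path, its format (YAML), and the field names are assumptions; use the data_cfg that ships with the repo as a template and point its entries at your preprocessed frames.

```python
# Minimal sketch for inspecting a data config before training on your own video
# (requires PyYAML). The path is hypothetical; open the data_cfg used by train.sh
# and check which entries (e.g. image/mask folders, subject name) need changing.
import yaml

cfg_path = "configs/data/my_video.yml"  # hypothetical path
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(f"{key}: {value}")
```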
## TODO
- add more processed data and trained models
- code for refining the pose of trained models
- with instant ngp
## Citation
```bibtex
@inproceedings{Feng2022scarf,
  author    = {Feng, Yao and Yang, Jinlong and Pollefeys, Marc and Black, Michael J. and Bolkart, Timo},
  title     = {Capturing and Animation of Body and Clothing from Monocular Video},
  year      = {2022},
  booktitle = {SIGGRAPH Asia 2022 Conference Papers},
  articleno = {45},
  numpages  = {9},
  location  = {Daegu, Republic of Korea},
  series    = {SA '22}
}
```
## Acknowledgments
We thank Sergey Prokudin, Weiyang Liu, Yuliang Xiu, Songyou Peng, and Qianli Ma for fruitful discussions, and PS members for proofreading. We also thank Betty Mohler, Tsvetelina Alexiadis, Claudia Gallatz, and Andres Camilo Mendoza Patino for their support with data.
Special thanks to Boyi Jiang and Sida Peng for sharing their data.
Here are some great resources we benefited from:
- FasterRCNN for detection
- RobustVideoMatting for background segmentation
- cloth-segmentation for clothing segmentation
- PIXIE for SMPL-X parameter estimation
- smplx for body models
- PyTorch3D for differentiable rendering
Some functions are based on other repositories; we acknowledge the origins individually in each file.
## License
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.
## Disclosure
MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a part-time employee of Meshcapade, his research was performed solely at, and funded solely by, the Max Planck Society. While TB is a part-time employee of Amazon, this research was performed solely at, and funded solely by, MPI.
## Contact
For further questions, please contact yao.feng@tue.mpg.de. For commercial licensing, please contact ps-licensing@tue.mpg.de.