<!-- PROJECT LOGO --> <p align="center"> <h1 align="center">ICON: Implicit Clothed humans Obtained from Normals</h1> <p align="center"> <a href="https://ps.is.tuebingen.mpg.de/person/yxiu"><strong>Yuliang Xiu</strong></a> · <a href="https://ps.is.tuebingen.mpg.de/person/jyang"><strong>Jinlong Yang</strong></a> · <a href="https://ps.is.mpg.de/~dtzionas"><strong>Dimitrios Tzionas</strong></a> · <a href="https://ps.is.tuebingen.mpg.de/person/black"><strong>Michael J. Black</strong></a> </p> <h2 align="center">CVPR 2022</h2> <div align="center"> <img src="./assets/teaser.gif" alt="Logo" width="100%"> </div> <p align="center"> <br> <a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a> <a href="https://pytorchlightning.ai/"><img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a> <a href='https://colab.research.google.com/drive/1-AWeWhPvCTBX0KfMtgtMk10uPU05ihoA?usp=sharing' style='padding-left: 0.5rem;'><img src='https://colab.research.google.com/assets/colab-badge.svg' alt='Google Colab'></a> <a href="https://huggingface.co/spaces/Yuliang/ICON" style='padding-left: 0.5rem;'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-orange'></a><br><br> <a href='https://arxiv.org/abs/2112.09127'> <img src='https://img.shields.io/badge/Paper-PDF-green?style=for-the-badge&logo=arXiv&logoColor=green' alt='Paper PDF'> </a> <a href='https://icon.is.tue.mpg.de/' style='padding-left: 0.5rem;'> <img src='https://img.shields.io/badge/ICON-Page-orange?style=for-the-badge&logo=Google%20chrome&logoColor=orange' alt='Project Page'> </a> <a href="https://discord.gg/Vqa7KBGRyk"><img src="https://img.shields.io/discord/940240966844035082?color=7289DA&labelColor=4a64bd&logo=discord&logoColor=white&style=for-the-badge"></a> <a href="https://youtu.be/hZd6AYin2DE"><img alt="youtube views" title="Subscribe to my YouTube channel" src="https://img.shields.io/youtube/views/hZd6AYin2DE?logo=youtube&labelColor=ce4630&style=for-the-badge"/></a> </p> </p> <br /> <br />

## News :triangular_flag_on_post:
- [2022/12/15] ICON belongs to the past; ECON is the future!
- [2022/09/12] KeypointNeRF's relative-spatial encoding is applied to ICON, with quantitative numbers in the evaluation
- [2022/07/30] <a href="https://huggingface.co/spaces/Yuliang/ICON" style='padding-left: 0.5rem;'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-orange'></a> <a href='https://colab.research.google.com/drive/1-AWeWhPvCTBX0KfMtgtMk10uPU05ihoA?usp=sharing' style='padding-left: 0.5rem;'><img src='https://colab.research.google.com/assets/colab-badge.svg' alt='Google Colab'></a> are both available
- [2022/07/26] New cloth-refinement module is released; try `-loop_cloth`
- [2022/06/13] ETH Zürich students from the 3DV course created an add-on for garment extraction
- [2022/05/16] <a href="https://github.com/Arthur151/ROMP">BEV</a> is supported as optional HPS by <a href="https://scholar.google.com/citations?hl=en&user=fkGxgrsAAAAJ">Yu Sun</a>, see commit #060e265
- [2022/05/15] Training code is released; please check the training instructions
- [2022/04/26] <a href="https://github.com/Jeff-sjtu/HybrIK">HybrIK (SMPL)</a> is supported as optional HPS by <a href="https://jeffli.site/">Jiefeng Li</a>, see commit #3663704
- [2022/03/05] <a href="https://github.com/YadiraF/PIXIE">PIXIE (SMPL-X)</a>, <a href="https://github.com/mkocabas/PARE">PARE (SMPL)</a>, <a href="https://github.com/HongwenZhang/PyMAF">PyMAF (SMPL)</a> are all supported as optional HPS
## Who needs ICON?
- If you want to train & evaluate PIFu / PaMIR / ICON using your own data, please check dataset.md to prepare the dataset, training.md for training, and evaluation.md for benchmark evaluation.
- Given a raw RGB image, you could get:
  - image (png):
    - segmented human RGB
    - normal maps of body and cloth
    - pixel-aligned normal-RGB overlap
  - mesh (obj):
    - SMPL-(X) body from PyMAF, PIXIE, PARE, HybrIK, or BEV
    - 3D clothed human reconstruction
    - 3D garments (requires a 2D mask)
  - video (mp4):
    - self-rotated clothed human
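The reconstructed meshes are standard Wavefront OBJ files, so they can be post-processed with any mesh library. Below is a minimal stdlib-only reader for vertices and triangle faces — a hypothetical helper for illustration, not part of the ICON codebase (real pipelines would more likely use trimesh or PyTorch3D):

```python
# Minimal Wavefront OBJ reader (illustrative sketch, not ICON code).
# Handles "v x y z" vertex records and "f a/b/c ..." triangle records.
def load_obj(path):
    verts, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                verts.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == "f":
                # OBJ indices are 1-based; "v/vt/vn" -> keep vertex index only
                faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))
    return verts, faces
```

A returned `(verts, faces)` pair plugs directly into most mesh toolkits that accept vertex and face arrays.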
*(Figures: ICON's intermediate results; ICON's SMPL pose refinement; image → overlapped normal prediction → ICON → refined ICON; 3D garment extracted from ICON using a 2D mask.)*
## Instructions
- See docs/installation.md to install all the required packages and setup the models
- See docs/dataset.md to synthesize the train/val/test dataset from THuman2.0
- See docs/training.md to train your own model using THuman2.0
- See docs/evaluation.md to benchmark trained models on CAPE testset
- Add-on: Garment Extraction from Fashion Images, contributed by ETH Zürich students as a 3DV course project.
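Benchmarks of this kind typically report Chamfer and point-to-surface distances between reconstructed and ground-truth scans. As an illustration only (ICON's own metric code lives in the repository), here is a brute-force symmetric Chamfer distance, assuming NumPy and point clouds small enough for an all-pairs matrix:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric average nearest-neighbour distance between point sets
    a (N, 3) and b (M, 3). Brute force O(N*M): fine for small clouds."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

For real scan resolutions, a KD-tree nearest-neighbour query replaces the dense distance matrix.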
## Running Demo
```bash
cd ICON

# model_type:
#   "pifu"            reimplemented PIFu
#   "pamir"           reimplemented PaMIR
#   "icon-filter"     ICON w/ global encoder (continuous local wrinkles)
#   "icon-nofilter"   ICON w/o global encoder (correct global pose)
#   "icon-keypoint"   ICON w/ relative-spatial encoding (insight from KeypointNeRF)

python -m apps.infer -cfg ./configs/icon-filter.yaml -gpu 0 -in_dir ./examples -out_dir ./results -export_video -loop_smpl 100 -loop_cloth 200 -hps_type pixie
```
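To sweep several model types or input folders, the command line above can be assembled programmatically. `build_infer_cmd` below is a hypothetical convenience wrapper, not part of the repository; it only assumes the flags shown in the command above:

```python
# Hypothetical helper: assemble the demo command for a given model type.
# Only flags that appear in the README's example invocation are used.
def build_infer_cmd(model_type, in_dir="./examples", out_dir="./results",
                    gpu=0, loop_smpl=100, loop_cloth=200, hps="pixie"):
    cfg = f"./configs/{model_type}.yaml"
    return ["python", "-m", "apps.infer", "-cfg", cfg, "-gpu", str(gpu),
            "-in_dir", in_dir, "-out_dir", out_dir,
            "-loop_smpl", str(loop_smpl), "-loop_cloth", str(loop_cloth),
            "-hps_type", hps]
```

The resulting list can be passed to `subprocess.run(...)` once per model type, e.g. for `"pifu"`, `"pamir"`, and `"icon-filter"` in turn.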
## More Qualitative Results
*(Figures: comparison with other state-of-the-art methods; predicted normals on in-the-wild images with extreme poses.)*
## Citation
```bibtex
@inproceedings{xiu2022icon,
  title     = {{ICON}: {I}mplicit {C}lothed humans {O}btained from {N}ormals},
  author    = {Xiu, Yuliang and Yang, Jinlong and Tzionas, Dimitrios and Black, Michael J.},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {13296-13306}
}
```
## Acknowledgments
We thank Yao Feng, Soubhik Sanyal, Qianli Ma, Xu Chen, Hongwei Yi, Chun-Hao Paul Huang, and Weiyang Liu for their feedback and discussions, Tsvetelina Alexiadis for her help with the AMT perceptual study, Taylor McConnell for her voice over, Benjamin Pellkofer for the webpage, and Yuanlu Xu for his help in comparing with ARCH and ARCH++.

Special thanks to Vassilis Choutas for sharing the code of bvh-distance-queries.
Here are some great resources we benefit from:
- MonoPortDataset for Data Processing
- PaMIR, PIFu, PIFuHD, and MonoPort for Benchmark
- SCANimate and AIST++ for Animation
- rembg for Human Segmentation
- PyTorch-NICP for normal-based non-rigid refinement
- smplx, PARE, PyMAF, PIXIE, BEV, and HybrIK for Human Pose & Shape Estimation
- CAPE and THuman for Dataset
- PyTorch3D for Differentiable Rendering
Some images used in the qualitative examples come from pinterest.com.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 (CLIPE Project).
## Contributors
Kudos to all of our amazing contributors! ICON thrives through open-source. In that spirit, we welcome all kinds of contributions from the community.
<a href="https://github.com/yuliangxiu/ICON/graphs/contributors"> <img src="https://contrib.rocks/image?repo=yuliangxiu/ICON" /> </a>

Contributor avatars are randomly shuffled.
<br>
## License
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.
## Disclosure
MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB was a part-time employee of Amazon during this project, his research was performed solely at, and funded solely by, the Max Planck Society.
## Contact
For more questions, please contact icon@tue.mpg.de
For commercial licensing, please contact ps-licensing@tue.mpg.de