# StereoPIFu: Depth Aware Clothed Human Digitization via Stereo Vision
| Project Page | Paper |
This repository contains a PyTorch implementation of "StereoPIFu: Depth Aware Clothed Human Digitization via Stereo Vision" (CVPR 2021).<br/> Authors: Yang Hong, Juyong Zhang, Boyi Jiang, Yudong Guo, Ligang Liu and Hujun Bao.
## Requirements
- Python 3
- PyTorch (<= 1.4.0; compatibility issues may occur with newer versions of PyTorch)
- tqdm
- opencv-python
- scikit-image
- openmesh

For building evaluation data:
- pybind11 (we recommend `pip install pybind11[global]` for installation)
- gcc
- cmake
Run the following command to install all pip packages:
```bash
pip install -r requirements.txt
```
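If you want to sanity-check the environment, the short snippet below (an illustrative sketch, not part of the repository) imports the listed dependencies and prints the PyTorch version, which should be 1.4.0 or lower:

```python
# Illustrative environment check (not part of the repository).
import torch
import cv2
import skimage
import openmesh  # noqa: F401  (only checking that the import succeeds)
import tqdm      # noqa: F401

print("torch:", torch.__version__)          # expected <= 1.4.0
print("opencv-python:", cv2.__version__)
print("scikit-image:", skimage.__version__)
print("CUDA available:", torch.cuda.is_available())
```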
<span id="Building_Evaluation"></span>
## Building Evaluation Data
### Preliminary
Run the following script to compile & generate the relevant python module, which is used to render left/right color/depth/mask images from the textured/colored mesh.
```bash
cd GenEvalData
bash build.sh
cd ..
```
### Usage
```bash
# demo, for textured mesh
python GenEvalData.py \
    --tex_mesh_path="TempData/SampleData/rp_dennis_posed_004_100k.obj" \
    --tex_img_path="TempData/SampleData/rp_dennis_posed_004_dif_2k.jpg" \
    --save_dir="./TempData/TexMesh" \
    --save_postfix="tex"

# demo, for colored mesh
python GenEvalData.py \
    --color_mesh_path="TempData/SampleData/normalized_mesh_0089.off" \
    --save_dir="./TempData/ColorMesh" \
    --save_postfix="color"
```
These samples are from the renderpeople and BUFF datasets.<br/> Note: the mesh used for rendering needs to lie within a specific bounding box (a normalization sketch follows below).
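The exact bounds are not documented here, so the following openmesh snippet is only a hypothetical sketch of one way to normalize a mesh; it assumes the target box is the cube [-1, 1]^3 centered at the origin, which you may need to adjust for your setup.

```python
# Hypothetical normalization sketch: rescale a mesh so it fits the cube
# [-1, 1]^3 centered at the origin. The target box is an assumption; the
# README does not specify the exact bounds expected by the renderer.
import numpy as np
import openmesh as om

mesh = om.read_trimesh("TempData/SampleData/normalized_mesh_0089.off")
pts = np.array(mesh.points())                       # (N, 3) vertex positions
center = 0.5 * (pts.max(axis=0) + pts.min(axis=0))  # bounding-box center
scale = 1.0 / np.abs(pts - center).max()            # longest half-extent -> 1
new_pts = (pts - center) * scale

for vh in mesh.vertices():
    mesh.set_point(vh, new_pts[vh.idx()])

om.write_mesh("TempData/SampleData/normalized_mesh_0089_rescaled.off", mesh)
```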
## Inference
### Preliminary
- Run the following script to compile and generate the deformable convolution module from AANet.
  ```bash
  cd AANetPlusFeature/deform_conv
  bash build.sh
  cd ../..
  ```
- Download the trained model and move it to the "Models" folder.
- Generate evaluation data as described in "Building Evaluation Data" above, or capture real data with a ZED camera (we tested on the ZED camera v1). <br/>Note: the left/right images must be rectified before they are used; a generic rectification sketch follows this list.
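The repository steps above do not spell out the rectification itself, so the snippet below is only a generic OpenCV sketch (not the authors' code): all calibration values, file names, and the image size are placeholders that must be replaced with the parameters from your own ZED calibration.

```python
# Generic stereo rectification sketch with OpenCV; every numeric value below
# is a placeholder and must come from your ZED calibration file.
import cv2
import numpy as np

K_left = np.array([[700.0, 0.0, 640.0],
                   [0.0, 700.0, 360.0],
                   [0.0, 0.0, 1.0]])          # placeholder intrinsics
K_right = K_left.copy()
dist_left = np.zeros(5)
dist_right = np.zeros(5)
R = np.eye(3)                                 # left-to-right rotation (placeholder)
T = np.array([[-0.12], [0.0], [0.0]])         # baseline in meters (placeholder)
image_size = (1280, 720)                      # (width, height)

R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
    K_left, dist_left, K_right, dist_right, image_size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(
    K_left, dist_left, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(
    K_right, dist_right, R2, P2, image_size, cv2.CV_32FC1)

left = cv2.imread("left.png")                 # placeholder file names
right = cv2.imread("right.png")
left_rect = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)
cv2.imwrite("left_rect.png", left_rect)
cv2.imwrite("right_rect.png", right_rect)
```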
### Demo
```bash
bash eval.sh
```
The reconstruction result will be saved to the "Results" folder.<br/> Note: at least 10 GB of GPU memory is recommended to run the StereoPIFu model.
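As a small convenience (not part of the repository), you can check whether your GPU meets the recommended 10 GB before launching the demo:

```python
# Convenience sketch: report the total memory of GPU 0 before running eval.sh.
import torch

if torch.cuda.is_available():
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU 0: {total_gb:.1f} GB total memory")  # >= 10 GB recommended
else:
    print("No CUDA device found")
```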
## Citation

```bibtex
@inproceedings{yang2021stereopifu,
  author    = {Yang Hong and Juyong Zhang and Boyi Jiang and Yudong Guo and Ligang Liu and Hujun Bao},
  title     = {StereoPIFu: Depth Aware Clothed Human Digitization via Stereo Vision},
  booktitle = {{IEEE/CVF} Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}
```

## Contact
If you have questions, please contact hymath@mail.ustc.edu.cn.