LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling
Project Page | Video | Paper
This repository contains the code for the ECCV'2022 paper "LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling".
In this paper, we introduce a local 4D implicit representation for dynamic humans, which combines the merits of 4D human modeling and local representations, and enables high-fidelity reconstruction from sparse point clouds or RGB-D videos.
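As a rough intuition for the representation (an illustrative sketch, not the repository's actual model code): space-time query points are expressed in the coordinate frame of a body-anchored local part, and a shared MLP conditioned on that part's latent code predicts occupancy at a given time. All names and dimensions below are hypothetical.
import torch
import torch.nn as nn

class LocalPartMLP(nn.Module):
    """Illustrative local 4D implicit function: a shared decoder queried
    with coordinates in a part's local frame, a timestamp, and that
    part's latent code. Names and sizes are hypothetical."""
    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(code_dim + 4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # occupancy logit (or signed distance)
        )

    def forward(self, local_xyz, t, code):
        # local_xyz: (N, 3) query points in the local part frame
        # t:         (N, 1) normalized time within the sequence
        # code:      (N, code_dim) latent code of the enclosing part
        return self.net(torch.cat([local_xyz, t, code], dim=-1))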
If you have any questions, please contact Boyan Jiang (byjiang18@fudan.edu.cn).
Citation
If you use our code for any purpose, please consider citing:
@inProceedings{jiang2022lord,
  title={LoRD: Local 4D Implicit Representation for High-Fidelity Dynamic Human Modeling},
  author={Boyan Jiang and Xinlin Ren and Mingsong Dou and Xiangyang Xue and Yanwei Fu and Yinda Zhang},
  booktitle={ECCV},
  year={2022}
}
Prerequisites
- PyTorch (tested with Python 3.7, PyTorch 1.5.1, CUDA 10.1 on Ubuntu 16.04)
- PyTorch3D (https://github.com/facebookresearch/pytorch3d)
- Chumpy
pip install ./chumpy
- Other dependencies
pip install -r requirements.txt
- Compile the extension module for evaluation
python setup.py build_ext --inplace
Data and Model
Dataset
The CAPE dataset is used for training and evaluation. Please check the official website for more details.
We provide some demo data and other necessary files at this link to show the usage of the code.
unzip lord_data.zip -d ./data
Pre-trained Model
The LoRD model pre-trained on 100 motion sequences can be downloaded from
this link.
Please unzip it to the out/lord/checkpoints folder:
mkdir -p out/lord/checkpoints
unzip lord_model.zip -d out/lord/checkpoints
Quick Demo
We provide data samples for running the demo in the data folder.
First, you need to process the raw data by running:
python scripts/cape_data_process.py
This will create the ground truth clothed meshes, SMPL body meshes, and point cloud sequences for training and testing in the dataset folder.
Then you can run LoRD on different types of input observations with the following commands:
# 4D Reconstruction from Sparse Points
python reconstruct.py lord \
--pcl_type pcl_test \
--exp_name fit_sparse_pcl \
--seq_name 03284_shortlong_simple_87
# Non-Rigid Depth Fusion
python scripts/depth2pcl.py # convert depth images to oriented point clouds
python reconstruct.py lord \
--pcl_type depth_pcl \
--exp_name fit_depth_pcl \
--seq_name 00134_longlong_twist_trial2_21
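For reference, the conversion performed by scripts/depth2pcl.py is essentially pinhole back-projection. A minimal sketch, assuming known camera intrinsics fx, fy, cx, cy (the function below is illustrative, not the script's API):
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Back-project a depth map of shape (H, W) into camera-space 3D points;
    # zero-depth pixels are treated as invalid and dropped.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[depth.reshape(-1) > 0]
Normals for the oriented point cloud can then be estimated from these points, e.g. with Open3D's estimate_normals.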
We provide the rendered depth images in the demo data. You can also render your own meshes with scripts/render.py, for example:
python scripts/render.py \
-i dataset/02474_longshort_ROM_lower_258/mesh
The depth images will be saved to the input mesh folder by default.
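scripts/render.py is the reference implementation; for orientation, a minimal depth rendering with PyTorch3D looks roughly like the sketch below (the mesh path and camera settings are placeholders):
import torch
from pytorch3d.io import load_objs_as_meshes
from pytorch3d.renderer import (FoVPerspectiveCameras,
                                RasterizationSettings, MeshRasterizer)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
mesh = load_objs_as_meshes(["path/to/frame.obj"], device=device)  # placeholder path
cameras = FoVPerspectiveCameras(device=device)
rasterizer = MeshRasterizer(cameras=cameras,
                            raster_settings=RasterizationSettings(image_size=512))
fragments = rasterizer(mesh)
depth = fragments.zbuf[..., 0]  # (1, H, W); -1 where no face was hit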
Note: We only provide meshes without texture information due to data copyright considerations. The CAPE raw scans with texture are available upon request. If you have colored point clouds, please enable the --texture flag to trigger our texture model.
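Under the hood, reconstruct.py runs test-time optimization: the pre-trained decoder stays frozen and the per-part latent codes are fitted to the observation. A conceptual sketch (the decoder call signature and loss are illustrative simplifications, not the repo's API):
import torch

def fit_latent_codes(decoder, local_pts, t, n_parts, code_dim=128, iters=500):
    # Optimize per-part latent codes so the implicit surface passes
    # through the observed points (which should lie on the level set).
    codes = torch.zeros(n_parts, code_dim, requires_grad=True)
    opt = torch.optim.Adam([codes], lr=1e-3)
    for _ in range(iters):
        opt.zero_grad()
        pred = decoder(local_pts, t, codes)            # illustrative call
        loss = pred.abs().mean() + 1e-4 * codes.pow(2).mean()  # + code regularizer
        loss.backward()
        opt.step()
    return codes.detach()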
Mesh Generation
When the LoRD optimization has finished, you can generate the mesh sequences via:
# 4D Reconstruction from Sparse Points
python generate_mesh.py lord \
--exp_name fit_sparse_pcl \
--pcl_type pcl_test \
--seq_name 03284_shortlong_simple_87
# Non-Rigid Depth Fusion
python generate_mesh.py lord \
--exp_name fit_depth_pcl \
--pcl_type depth_pcl \
--seq_name 00134_longlong_twist_trial2_21
The generated meshes are saved to the folder out/lord/<exp_name>/vis.
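Conceptually, mesh generation evaluates the optimized implicit function on a dense grid for each frame and extracts the level set with marching cubes; a sketch (query_fn is an illustrative callable, not the repo's API):
import numpy as np
import torch
from skimage.measure import marching_cubes

def extract_frame_mesh(query_fn, t, resolution=128, bound=1.0):
    # Sample occupancy on a resolution^3 grid at time t, then run
    # marching cubes on the 0.5 level set.
    lin = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), -1).reshape(-1, 3)
    with torch.no_grad():
        occ = query_fn(torch.from_numpy(grid).float(), t)
    vol = occ.reshape(resolution, resolution, resolution).numpy()
    verts, faces, _, _ = marching_cubes(vol, level=0.5)
    verts = verts / (resolution - 1) * 2 * bound - bound  # grid -> world coords
    return verts, faces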
Inner Body Refinement
The above demos use ground truth SMPL meshes by default. You can also estimate the SMPL parameters with the 4D representation method H4D. We provide demo code for using our Inner Body Refinement:
- Use the Linear Motion Model (PCA layer) from H4D to fit the input point clouds (see the motion-model sketch after these steps).
python h4d_fitting.py \
--pcl_type depth_pcl \
--seq_name 02474_longshort_ROM_lower_258
The fitted SMPL meshes can be found in dataset/02474_longshort_ROM_lower_258.
- Optimize LoRD with the initial estimated SMPL.
python reconstruct.py lord \
--pcl_type depth_pcl \
--exp_name fit_depth_pcl_h4d_pose \
--seq_name 02474_longshort_ROM_lower_258 \
--use_h4d_smpl
- Refine the initial SMPL estimation and perform optimization with the refined SMPL (see the refinement sketch after these steps).
python reconstruct.py lord \
--pcl_type depth_pcl \
--exp_name fit_depth_pcl_h4d_pose_refine \
--seq_name 02474_longshort_ROM_lower_258 \
--use_h4d_smpl \
--smpl_refine
- Generate mesh sequence from the optimized latent codes.
python generate_mesh.py lord \
--pcl_type depth_pcl \
--exp_name fit_depth_pcl_h4d_pose_refine \
--seq_name 02474_longshort_ROM_lower_258 \
--use_h4d_smpl
The generated meshes are saved to the folder
out/lord/fit_depth_pcl_h4d_pose_refine/vis/02474_longshort_ROM_lower_258
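For intuition on the fitting in the first step: H4D's Linear Motion Model represents a pose sequence as a mean trajectory plus a linear combination of principal motion components, so fitting reduces to optimizing a low-dimensional coefficient vector. A sketch (shapes and names are illustrative, not H4D's API):
import torch

def motion_from_coeffs(mean_motion, basis, coeffs):
    # Linear motion model: pose sequence = mean + sum_k coeffs[k] * basis[k].
    # mean_motion: (T, P)    mean pose trajectory over T frames
    # basis:       (K, T, P) K principal motion components
    # coeffs:      (K,)      motion code optimized against the observation
    return mean_motion + torch.einsum("k,ktp->tp", coeffs, basis)
The refinement enabled by --smpl_refine can be thought of as gradient-based fitting of the SMPL parameters under a point-cloud objective, e.g. a Chamfer term, as in this sketch (smpl_forward stands in for an SMPL layer and is not part of this repo's API):
import torch
from pytorch3d.loss import chamfer_distance

def refine_smpl(smpl_forward, pose, betas, obs_pts, iters=200, lr=1e-2):
    # Nudge SMPL pose/shape so the body surface matches the observed points.
    # smpl_forward(pose, betas) -> (1, V, 3) vertices; obs_pts: (1, N, 3).
    pose = pose.clone().requires_grad_(True)
    betas = betas.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose, betas], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss, _ = chamfer_distance(smpl_forward(pose, betas), obs_pts)
        loss.backward()
        opt.step()
    return pose.detach(), betas.detach()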
Training
You can train the LoRD model from scratch via:
python train.py lord
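Conceptually, training follows an auto-decoder pattern: the decoder weights and the per-sequence local latent codes are optimized jointly, supervised by occupancy labels of points sampled around the ground truth meshes. A schematic training step (illustrative, not the repo's actual training loop):
import torch
import torch.nn.functional as F

def train_step(decoder, codes, pts, t, occ_gt, optimizer):
    # One joint optimization step over decoder weights and latent codes.
    # occ_gt: (N, 1) float occupancy labels for the sampled points.
    optimizer.zero_grad()
    logits = decoder(pts, t, codes)               # (N, 1) occupancy logits
    loss = F.binary_cross_entropy_with_logits(logits, occ_gt)
    loss = loss + 1e-4 * codes.pow(2).mean()      # latent code regularizer
    loss.backward()
    optimizer.step()
    return loss.item()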
Evaluation
We borrow some code from ONet to evaluate the accuracy of the reconstructed shapes with Chamfer Distance, Normal Consistency, and F-Score. Please check eval.py for more details.
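eval.py is the reference; as a sketch of what these metrics compute on sampled point sets (the threshold and sampling are illustrative):
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_fscore(pred_pts, gt_pts, tau=0.01):
    # Symmetric Chamfer-L2 and F-Score at threshold tau between point sets
    # sampled from the predicted and ground-truth surfaces.
    d_pred2gt = cKDTree(gt_pts).query(pred_pts)[0]  # nearest-neighbor distances
    d_gt2pred = cKDTree(pred_pts).query(gt_pts)[0]
    chamfer = (d_pred2gt ** 2).mean() + (d_gt2pred ** 2).mean()
    precision = (d_pred2gt < tau).mean()
    recall = (d_gt2pred < tau).mean()
    fscore = 2 * precision * recall / max(precision + recall, 1e-8)
    return chamfer, fscore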
Further Information
This project is related to LIG, OFlow, 4D-CR, and H4D. If you are interested in local 3D representations and 4D representations, please check out these projects, which are prior works in these areas.
License
Apache License Version 2.0