TCMR: Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video
| Qualitative result | Paper teaser video |
| --- | --- |
News
- Update 22.06.17: Now you can reproduce Table 6! No changes to the running commands.
- Update 22.06.06: NeuralAnnot SMPL annotations of Human3.6M are released!
Introduction
This repository is the official PyTorch implementation of Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video. Find more qualitative results here. The base code is largely borrowed from VIBE.
Installation
TCMR is tested on Ubuntu 20.04 with PyTorch 1.12 + CUDA 11.3 and Python 3.9. Previously, it was tested on Ubuntu 16.04 with PyTorch 1.4 and Python 3.7.10. You may need sudo privileges for the installation.
source scripts/install_pip.sh
If you have a problem related to `torchgeometry`, please check this out.
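The usual symptom with newer PyTorch versions is a bool-subtraction error raised from `torchgeometry/core/conversions.py`. A minimal sketch of a workaround is below; it assumes the stock torchgeometry 0.1.2 source and simply rewrites `(1 - mask_*)` into `(~mask_*)`. Verify the patched file afterward.
# Locate the installed torchgeometry package and patch the bool-subtraction
# lines; a .bak backup of conversions.py is kept.
TGM=$(python -c "import torchgeometry, os; print(os.path.dirname(torchgeometry.__file__))")
sed -i.bak 's/(1 - \(mask_[a-z0-9_]*\))/(~\1)/g' "$TGM/core/conversions.py"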
Quick demo
- Download the pre-trained demo TCMR and required data with the command below, then download the SMPL layers from here (male & female) and here (neutral). Put the SMPL layers (pkl files) under `${ROOT}/data/base_data/`.
source scripts/get_base_data.sh
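A quick check that the SMPL layers landed where TCMR expects them (the exact pkl file names depend on the SMPL release you downloaded):
# You should see the male, female, and neutral SMPL pkls listed here.
ls data/base_data/*.pkl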
- Run the demo with options (e.g., rendering on a plain background). See the bottom lines of `demo.py` for more option details.
- A video overlaid with the rendered meshes will be saved in `${ROOT}/output/demo_output/`.
python demo.py --vid_file demo.mp4 --gpu 0
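Other invocations follow the same shape. Any flag below beyond `--vid_file` and `--gpu` is an assumption on my part, so confirm the exact names at the bottom of `demo.py` before running:
# Hypothetical variants -- verify flag names in demo.py:
python demo.py --vid_file demo.mp4 --gpu 0 --render_plain    # render on a plain background
python demo.py --vid_file my_video.avi --gpu 1               # another input video / GPU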
Results
Here I report the performance of TCMR.
See our paper for more details.
Running TCMR
Download pre-processed data (except InstaVariety dataset) from here.
The pre-processed InstaVariety dataset is uploaded by the VIBE authors here.
You may also download the original datasets and pre-process them yourself; refer to this.
Put SMPL layers (pkl files) under `${ROOT}/data/base_data/`.
The data directory structure should follow the hierarchy below.
${ROOT}
|-- data
| |-- base_data
| |-- preprocessed_data
| |-- pretrained_models
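A quick shell check (a hypothetical helper, not part of the repo) to verify the hierarchy before running anything:
# Report which of the expected data subdirectories exist under ${ROOT}.
for d in base_data preprocessed_data pretrained_models; do
  [ -d "data/$d" ] && echo "ok: data/$d" || echo "missing: data/$d"
done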
Evaluation
- Download pre-trained TCMR weights from here.
- Run the evaluation code with a corresponding config file to reproduce the performance in the tables of our paper.
# dataset: 3dpw, mpii3d, h36m
python evaluate.py --dataset 3dpw --cfg ./configs/repr_table4_3dpw_model.yaml --gpu 0
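# The other datasets follow the same pattern; the config names below are
# assumed from the naming scheme above -- verify them against ./configs/:
python evaluate.py --dataset h36m --cfg ./configs/repr_table4_h36m_model.yaml --gpu 0
python evaluate.py --dataset mpii3d --cfg ./configs/repr_table4_mpii3d_model.yaml --gpu 0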
- You may test options such as average filtering and rendering. See the bottom lines of `${ROOT}/lib/core/config.py`.
- We checked the rendering results of TCMR on the 3DPW validation and test sets.
Reproduction (Training)
- Run the training code with a corresponding config file to reproduce the performance in the tables of our paper.
- Some behavior is hard-coded based on the config file's name. Please use the exact config file to reproduce results, rather than changing the contents of the default config file.
# training outputs are saved in `experiments` directory
# mkdir experiments
python train.py --cfg ./configs/repr_table4_3dpw_model.yaml --gpu 0
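# Per the News note above, Table 6 is reproducible with the same command
# shape; this config name is assumed from the Table 4 naming -- verify it
# exists in ./configs/:
python train.py --cfg ./configs/repr_table6_3dpw_model.yaml --gpu 0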
- After training, the checkpoints are saved in `${ROOT}/experiments/{date_of_training}/`. Change the config file's `TRAIN.PRETRAINED` to the checkpoint path (either `checkpoint.pth.tar` or `model_best.pth.tar`) and follow the evaluation command; see the example after this list.
- You may test the motion discriminator introduced in VIBE by uncommenting the code marked with `exclude motion discriminator` notations.
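Concretely, after editing `TRAIN.PRETRAINED` in the config (substituting your actual run directory for the `{date_of_training}` placeholder), evaluation is the same command as before:
# TRAIN.PRETRAINED should point at e.g.
#   ${ROOT}/experiments/{date_of_training}/model_best.pth.tar
python evaluate.py --dataset 3dpw --cfg ./configs/repr_table4_3dpw_model.yaml --gpu 0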
Reference
@InProceedings{choi2020beyond,
title={Beyond Static Features for Temporally Consistent 3D Human Pose and Shape from a Video},
author={Choi, Hongsuk and Moon, Gyeongsik and Chang, Ju Yong and Lee, Kyoung Mu},
booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2021}
}
License
This project is licensed under the terms of the MIT license.
Related Projects
I2L-MeshNet_RELEASE
3DCrowdNet_RELEASE
Hand4Whole_RELEASE
HandOccNet
NeuralAnnot_RELEASE