Human Mesh Recovery from Multiple Shots
Code repository for the paper:
Human Mesh Recovery from Multiple Shots
Georgios Pavlakos, Jitendra Malik, Angjoo Kanazawa
CVPR 2022
[paper] [project page]
Installation instructions
We recommend creating a virtual environment:
python3.6 -m venv .mshot
source .mshot/bin/activate
or alternatively a conda environment:
conda create -n mshot python=3.6
conda activate mshot
and then installing all dependencies:
pip install -r requirements.txt
python setup.py develop
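After installation, a quick import check can confirm that the dependencies resolved correctly. This is only a convenience sketch; the package names to probe should be taken from requirements.txt (numpy below is an illustrative guess, not a confirmed dependency):

```python
import importlib

def check_imports(packages):
    """Try to import each listed package; return {name: True/False}."""
    results = {}
    for name in packages:
        try:
            importlib.import_module(name)
            results[name] = True
        except ImportError:
            results[name] = False
    return results

# Substitute the actual packages listed in requirements.txt.
print(check_imports(["numpy"]))
```

Any False entry means that package failed to install inside the virtual environment.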
Data download and preprocessing
To download the data for AVA, you can run:
python prepare_ava.py --download --extract_midframes
This script will download the movies from the CVDF repository and extract the relevant frames from the video files.
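Once the script finishes, you may want to confirm that frames were actually extracted. A minimal sketch follows; the directory layout and image extensions are assumptions, so adjust them to wherever prepare_ava.py writes its frames:

```python
from pathlib import Path

def count_frames(root, exts=(".jpg", ".jpeg", ".png")):
    """Return the number of image files found under root (recursively)."""
    root = Path(root)
    return sum(1 for p in root.rglob("*") if p.suffix.lower() in exts)

# Example (the path is hypothetical):
# print(count_frames("ava_frames"))
```

A count of zero after the script completes would indicate the download or extraction step failed.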
Moreover, you will need to download some additional data. These include the pseudo ground truth for AVA, as well as some files that will be helpful for running the code. You can find them here. Please download and unzip the folder.
Additionally, you will need to download the SMPL model (the neutral model is used in most cases); you can put it under mshot_data/models/smpl. You will also need the GMM prior; you can put that file under mshot_data/priors.
Finally, for training purposes, you will need to download the necessary datasets. The instructions largely follow the description here.
Demo
To run a demo of our fitting code, we recommend using the PHALP tracking method. This will output the necessary files for our multi-shot optimization, including tracklet information and initialization. We provide a script that post-processes the PHALP output so that the multi-shot optimization can be run; it also requires detections from OpenPose.
python3 optimization/process_phalp.py --phalp_output /path/to/phalp/output \
--phalp_demo /path/to/phalp/demo/output \
--openpose_output /path/to/openpose/detections \
--output_npz output/demo/phalp.npz \
--tracklet_id 1
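Before launching the optimization, it can help to sanity-check the generated npz file. The keys stored by process_phalp.py are not documented here, so this sketch simply reports whatever arrays the archive contains:

```python
import numpy as np

def summarize_npz(path):
    """Return {key: shape} for every array stored in an .npz archive."""
    with np.load(path, allow_pickle=True) as data:
        return {k: np.asarray(data[k]).shape for k in data.files}

# Example:
# print(summarize_npz("output/demo/phalp.npz"))
```

An empty dictionary (or a load error) would suggest the post-processing step did not complete successfully.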
Given that output, we can run our optimization code.
python3 optimization/main.py --config optimization/fit_smpl.yaml \
--batch_size 16 \
--npz output/demo/phalp.npz \
--output_folder output/demo/
To visualize results, you can run:
python3 optimization/render_fittings.py --config optimization/fit_smpl.yaml \
--batch_size 1 \
--npz output/demo/phalp.npz \
--output_folder output/demo
Alternatively, you can run a demo of the regression models. This again assumes the output from PHALP has already been produced:
python3 regression/demo.py --npz output/demo/phalp.npz \
--output_folder output/demo
Training code
To train an HMR model with the AVA data, you can run:
python regression/train.py --name=train_hmr --new_experiment_config=regression/configs/train_hmr.yaml --saved_experiment_dir=experiments_regression
To train a t-HMMR model with the AVA data, you can run:
python regression/train.py --name=train_thmmr --new_experiment_config=regression/configs/train_thmmr.yaml --saved_experiment_dir=experiments_regression
Acknowledgements
Parts of the code are taken or adapted from the following repos:
Citing
If you find this code useful for your research, or if you use data generated by our method, please consider citing the following paper:
@Inproceedings{pavlakos2022multishot,
Title = {Human Mesh Recovery from Multiple Shots},
Author = {Pavlakos, Georgios and Malik, Jitendra and Kanazawa, Angjoo},
Booktitle = {CVPR},
Year = {2022}
}