<div align="center"> <h1 align="center">3D Human Mesh Estimation from Virtual Markers <br> (CVPR 2023)</h1> </div> <div align="left">

<a>Python 3.8+</a> <a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a> <a href="https://github.com/ShirleyMaxx/VirtualMarker/blob/main/LICENSE">License</a> <a href="https://arxiv.org/abs/2303.11726">arXiv</a>

</div> <p align="center"> <img src="demo/quality_results.png"/> </p> <p align="middle"> <img src="demo/demo_result1.gif" height="120" /> <img src="demo/demo_result2.gif" height="120" /> <img src="demo/demo_result3.gif" height="120" /> <img src="demo/demo_result4.gif" height="120" /> <img src="demo/demo_result5.gif" height="120" /> </p>

Introduction

This is the official PyTorch implementation of our paper:

<h3 align="center">3D Human Mesh Estimation from Virtual Markers (CVPR 2023)</h3> <h4 align="center" style="text-decoration: none;"> <a href="https://shirleymaxx.github.io/", target="_blank"><b>Xiaoxuan Ma</b></a> , <a href="https://scholar.google.com/citations?user=DoUvUz4AAAAJ&hl=en", target="_blank"><b>Jiajun Su</b></a> , <a href="https://www.chunyuwang.org/", target="_blank"><b>Chunyu Wang</b></a> , <a href="https://wentao.live/", target="_blank"><b>Wentao Zhu</b></a> , <a href="https://cfcs.pku.edu.cn/english/people/faculty/yizhouwang/index.htm", target="_blank"><b>Yizhou Wang</b></a> </h4> <h4 align="center"> <a href="https://shirleymaxx.github.io/virtual_marker/", target="_blank">[project page]</a> / <a href="https://www.youtube.com/watch?v=je2gNUiYl2c", target="_blank">[video]</a> / <a href="https://arxiv.org/pdf/2303.11726.pdf", target="_blank">[arXiv]</a> / <a href="https://openaccess.thecvf.com/content/CVPR2023/papers/Ma_3D_Human_Mesh_Estimation_From_Virtual_Markers_CVPR_2023_paper.pdf", target="_blank">[paper]</a> / <a href="https://openaccess.thecvf.com/content/CVPR2023/supplemental/Ma_3D_Human_Mesh_CVPR_2023_supplemental.pdf", target="_blank">[supplementary]</a> </h4>

Below are the learned virtual markers and the overall framework.

<p align="center"> <img src="demo/virtualmarker.gif" height="160" /> <img src="demo/pipeline.png" height="160" /> </p>

News :triangular_flag_on_post:

[2023/05/21] Project page with more demos.

[2023/04/23] Demo code released!

TODO :white_check_mark:

Installation

  1. Install dependencies. This project is developed with Python >= 3.8 on Ubuntu 16.04. NVIDIA GPUs are needed. We recommend using an Anaconda virtual environment.
  # 1. Create a conda virtual environment.
  conda create -n pytorch python=3.8 -y
  conda activate pytorch

  # 2. Install PyTorch >= v1.6.0 following [official instruction](https://pytorch.org/). Please adapt the cuda version to yours.
  pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

  # 3. Pull our code.
  git clone https://github.com/ShirleyMaxx/VirtualMarker.git
  cd VirtualMarker

  # 4. Install other packages. This project doesn't have any special or difficult-to-install dependencies.
  sh requirements.sh

  # 5. Install VirtualMarker.
  python setup.py develop
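
  # 6. (Optional) Sanity check, assuming the steps above succeeded: verify that
  #    PyTorch was installed with CUDA support and can see a GPU.
  python -c "import torch; print(torch.__version__, torch.cuda.is_available())"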
  2. Prepare the SMPL layer. We use smplx.

    1. Install the smplx package via pip install smplx (already done in the first step).
    2. Download basicModel_f_lbs_10_207_0_v1.0.0.pkl, basicModel_m_lbs_10_207_0_v1.0.0.pkl, and basicModel_neutral_lbs_10_207_0_v1.0.0.pkl from here (female & male) and here (neutral) to ${Project}/data/smpl. Please rename them to SMPL_FEMALE.pkl, SMPL_MALE.pkl, and SMPL_NEUTRAL.pkl, respectively.
    3. Download the other SMPL-related files from Google drive or Onedrive and put them in ${Project}/data/smpl. A minimal SMPL-layer loading sketch is shown after the directory tree below.
  3. Download the data following the Data section. In summary, your directory tree should look like this:

  ${Project}
  ├── assets
  ├── command
  ├── configs
  ├── data 
  ├── demo 
  ├── experiment 
  ├── inputs 
  ├── virtualmarker 
  ├── main 
  ├── models 
  ├── README.md
  ├── setup.py
  └── requirements.sh
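
For reference, the snippet below is a minimal sketch of loading the SMPL layer with smplx once the files from step 2 are in place; the data/smpl path and the zero pose/shape inputs are illustrative, not the repository's actual code.

```python
# Minimal sketch: load the SMPL layer with smplx (assumes the .pkl files
# from step 2 have been renamed and placed under data/smpl).
import torch
import smplx

smpl_layer = smplx.SMPL(model_path="data/smpl", gender="neutral")

# A zero pose/shape yields the template mesh; vertices have shape (1, 6890, 3).
output = smpl_layer(
    betas=torch.zeros(1, 10),         # shape coefficients
    body_pose=torch.zeros(1, 69),     # 23 body joints x 3 axis-angle parameters
    global_orient=torch.zeros(1, 3),  # root orientation
)
print(output.vertices.shape)
```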

Quick demo :star:

  1. Installation. Make sure you have finished the above installation successfully. VirtualMarker does not detect people; it only estimates the relative pose and mesh. Therefore, please also install VirtualPose following its instructions. VirtualPose detects all the people in the image and estimates their root depths. Download its model weight from Google drive or Onedrive and put it under VirtualPose.
git clone https://github.com/wkom/VirtualPose.git
cd VirtualPose
python setup.py develop
  2. Render Env. If you run this code in an SSH environment without a display device, please do the following (a minimal headless-rendering sketch is shown after these sub-steps):
1. Install OSMesa following https://pyrender.readthedocs.io/en/latest/install/
2. Reinstall the specific pyopengl fork: https://github.com/mmatl/pyopengl
3. Set the OpenGL backend to osmesa via os.environ["PYOPENGL_PLATFORM"] = "osmesa"
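
The snippet below is only an illustrative sketch of headless rendering with pyrender under the osmesa backend (the 512x512 viewport size is an arbitrary example, not a value from this repository):

```python
# Headless rendering sketch: the backend must be set BEFORE pyrender is imported.
import os
os.environ["PYOPENGL_PLATFORM"] = "osmesa"

import pyrender  # imported after setting the OSMesa backend

# An offscreen renderer works without a display device (e.g., over SSH).
renderer = pyrender.OffscreenRenderer(viewport_width=512, viewport_height=512)
```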
  3. Model weight. Download the pre-trained VirtualMarker model baseline_mix from Google drive or Onedrive. Put the weight under the experiment folder, following the directory structure. Specify the weight path via test.weight_path in configs/simple3dmesh_infer/baseline.yml.

  4. Input image/video. Prepare input.jpg or input.mp4 and put it in the inputs folder. Both image and video inputs are supported. Specify the input path and type via arguments.

  5. Run. Execute the script below; you can check the output at experiment/simple3dmesh_infer/exp_*/vis.

sh command/simple3dmesh_infer/baseline.sh

Train & Eval

Data

The data directory structure should follow the hierarchy below. Please download the images from the official dataset sites, and download all the processed annotation files from Google drive or Onedrive.

${Project}
|-- data
    |-- 3DHP
    |   |-- annotations
    |   `-- images
    |-- COCO
    |   |-- annotations
    |   `-- images
    |-- Human36M
    |   |-- annotations
    |   `-- images
    |-- PW3D
    |   |-- annotations
    |   `-- images
    |-- SURREAL
    |   |-- annotations
    |   `-- images
    |-- Up_3D
    |   |-- annotations
    |   `-- images
    `-- smpl
        |-- smpl_indices.pkl
        |-- SMPL_FEMALE.pkl
        |-- SMPL_MALE.pkl
        |-- SMPL_NEUTRAL.pkl
        |-- mesh_downsampling.npz
        |-- J_regressor_extra.npy
        `-- J_regressor_h36m_correct.npy

Train

Every experiment is defined by config files. Configs of the experiments in the paper can be found in the ./configs directory. You can use the scripts under command to run them.

To train the model, simply run the scripts below. Specific configurations can be modified in the corresponding configs/simple3dmesh_train/baseline.yml file. The default setting uses 4 GPUs (16 GB V100). Multi-GPU training is implemented with PyTorch's DataParallel. Results can be seen in the experiment directory or in TensorBoard.

We conduct mix-training on the H3.6M and 3DPW datasets. To reproduce the reported results on the 3DPW dataset, please first run train_h36m.sh and then load the final weight to train on 3DPW by running train_pw3d.sh. This finetuning strategy yields faster training and better performance. We train a separate model on the SURREAL dataset using train_surreal.sh.

sh command/simple3dmesh_train/train_h36m.sh
sh command/simple3dmesh_train/train_pw3d.sh
sh command/simple3dmesh_train/train_surreal.sh
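
As noted above, multi-GPU training relies on PyTorch's DataParallel. The sketch below only illustrates that mechanism with a placeholder module; it is not the repository's actual training code:

```python
# Illustrative only: wrapping a model with DataParallel to use all visible GPUs.
import torch
import torch.nn as nn

model = nn.Linear(2048, 1024)       # placeholder for the actual mesh-estimation network
if torch.cuda.device_count() > 1:   # e.g., the default 4-GPU setting
    model = nn.DataParallel(model)  # splits each input batch across GPUs and gathers outputs
model = model.cuda()
```

The number of visible GPUs can be restricted from the shell with CUDA_VISIBLE_DEVICES, e.g. CUDA_VISIBLE_DEVICES=0,1,2,3.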

Evaluation

To evaluate the model, specify the model weight path via test.weight_path in configs/simple3dmesh_test/baseline_*.yml. The argument --mode test should be set. Results can be seen in the experiment directory or in TensorBoard.

sh command/simple3dmesh_test/test_h36m.sh
sh command/simple3dmesh_test/test_pw3d.sh
sh command/simple3dmesh_test/test_surreal.sh

Model Zoo

| Test set | MPVE (mm) | MPJPE (mm) | PA-MPJPE (mm) | Model weight | Config |
|:---:|:---:|:---:|:---:|:---:|:---:|
| Human3.6M | 58.0 | 47.3 | 32.0 | Google drive / Onedrive | cfg |
| 3DPW | 77.9 | 67.5 | 41.3 | Google drive / Onedrive | cfg |
| SURREAL | 44.7 | 36.9 | 28.9 | Google drive / Onedrive | cfg |
| in-the-wild* | - | - | - | Google drive / Onedrive | - |

* We further train a model for better inference performance on in-the-wild scenes by finetuning the 3DPW model on the SURREAL dataset.

Citation

Please cite as below if you find this repository helpful to your project:

@InProceedings{Ma_2023_CVPR,
    author    = {Ma, Xiaoxuan and Su, Jiajun and Wang, Chunyu and Zhu, Wentao and Wang, Yizhou},
    title     = {3D Human Mesh Estimation From Virtual Markers},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {534-543}
}

Acknowledgement

This repo is built on the excellent works GraphCMR, SPIN, Pose2Mesh, HybrIK, and CLIFF. Thanks to these great projects.