
Distribution Aware Coordinate Representation for Human Pose Estimation

<p align="center"> <b><i>Serving as a model-agnostic plug-in, DARK significantly improves the performance of a variety of state-of-the-art human pose estimation models! </i></b> </p>

News

Introduction

    This work fills a gap in the literature by studying the coordinate representation used in human pose estimation, with a particular focus on heatmaps. We formulate a novel Distribution-Aware coordinate Representation of Keypoint (DARK) method. Serving as a model-agnostic plug-in, DARK significantly improves the performance of a variety of state-of-the-art human pose estimation models.

Illustrating the architecture of the proposed DARK
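To make the plug-in idea concrete, below is a minimal NumPy sketch of DARK's distribution-aware decoding: the predicted heatmap is first modulated by Gaussian smoothing, and the integer argmax is then refined to sub-pixel accuracy with a second-order Taylor expansion of the log-heatmap. The kernel width and finite-difference details here are illustrative assumptions; see this repo for the official implementation.

```python
# Minimal sketch of DARK-style distribution-aware decoding (one joint).
import numpy as np
from scipy.ndimage import gaussian_filter

def dark_decode(heatmap, sigma=2.0):
    """Refine the argmax of a single HxW heatmap to sub-pixel accuracy."""
    # 1) Modulate: smooth the heatmap so it better matches the Gaussian
    #    assumption, rescaling to keep the original peak magnitude.
    h = gaussian_filter(heatmap, sigma)
    h = h * (heatmap.max() / max(h.max(), 1e-10))
    h = np.maximum(h, 1e-10)
    logh = np.log(h)

    # 2) Integer maximum.
    y, x = np.unravel_index(np.argmax(h), h.shape)
    H, W = h.shape
    if not (1 <= x < W - 1 and 1 <= y < H - 1):
        return float(x), float(y)  # cannot take derivatives at the border

    # 3) Gradient and Hessian of the log-heatmap via finite differences.
    dx = 0.5 * (logh[y, x + 1] - logh[y, x - 1])
    dy = 0.5 * (logh[y + 1, x] - logh[y - 1, x])
    dxx = logh[y, x + 1] - 2 * logh[y, x] + logh[y, x - 1]
    dyy = logh[y + 1, x] - 2 * logh[y, x] + logh[y - 1, x]
    dxy = 0.25 * (logh[y + 1, x + 1] - logh[y + 1, x - 1]
                  - logh[y - 1, x + 1] + logh[y - 1, x - 1])

    grad = np.array([dx, dy])
    hess = np.array([[dxx, dxy], [dxy, dyy]])
    if abs(np.linalg.det(hess)) < 1e-10:
        return float(x), float(y)

    # 4) Taylor-expansion refinement: mu = m - H^{-1} * grad.
    offset = -np.linalg.solve(hess, grad)
    return float(x + offset[0]), float(y + offset[1])
```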

Our CVPR 2019 work Fast Human Pose Estimation works seamlessly with DARK; its code is available on GitHub.

Main Results

Results on COCO val2017, using a person detector with human AP of 56.4 on COCO val2017

| Baseline | Input size | #Params | GFLOPs | AP | AP .5 | AP .75 | AP (M) | AP (L) | AR |
|---|---|---|---|---|---|---|---|---|---|
| Hourglass(4 Blocks) | 128×96 | 13.0M | 2.7 | 66.2 | 87.6 | 75.1 | 63.8 | 71.4 | 72.8 |
| Hourglass(4 Blocks) + DARK | 128×96 | 13.0M | 2.7 | 69.6 | 87.8 | 77.0 | 67.0 | 75.4 | 75.7 |
| Hourglass(8 Blocks) | 128×96 | 25.1M | 4.9 | 67.6 | 88.3 | 77.4 | 65.2 | 73.0 | 74.0 |
| Hourglass(8 Blocks) + DARK | 128×96 | 25.1M | 4.9 | 70.8 | 87.9 | 78.3 | 68.3 | 76.4 | 76.6 |
| SimpleBaseline-R50 | 128×96 | 34.0M | 2.3 | 59.3 | 85.5 | 67.4 | 57.8 | 63.8 | 66.6 |
| SimpleBaseline-R50 + DARK | 128×96 | 34.0M | 2.3 | 62.6 | 86.1 | 70.4 | 60.4 | 67.9 | 69.5 |
| SimpleBaseline-R101 | 128×96 | 53.0M | 3.1 | 58.8 | 85.3 | 66.1 | 57.3 | 63.4 | 66.1 |
| SimpleBaseline-R101 + DARK | 128×96 | 53.0M | 3.1 | 63.2 | 86.2 | 71.1 | 61.2 | 68.5 | 70.0 |
| SimpleBaseline-R152 | 128×96 | 68.6M | 3.9 | 60.7 | 86.0 | 69.6 | 59.0 | 65.4 | 68.0 |
| SimpleBaseline-R152 + DARK | 128×96 | 68.6M | 3.9 | 63.1 | 86.2 | 71.6 | 61.3 | 68.1 | 70.0 |
| HRNet-W32 | 128×96 | 28.5M | 1.8 | 66.9 | 88.7 | 76.3 | 64.6 | 72.3 | 73.7 |
| HRNet-W32 + DARK | 128×96 | 28.5M | 1.8 | 70.7 | 88.9 | 78.4 | 67.9 | 76.6 | 76.7 |
| HRNet-W48 | 128×96 | 63.6M | 3.6 | 68.0 | 88.9 | 77.4 | 65.7 | 73.7 | 74.7 |
| HRNet-W48 + DARK | 128×96 | 63.6M | 3.6 | 71.9 | 89.1 | 79.6 | 69.2 | 78.0 | 77.9 |
| HRNet-W32 | 256×192 | 28.5M | 7.1 | 74.4 | 90.5 | 81.9 | 70.8 | 81.0 | 79.8 |
| HRNet-W32 + DARK | 256×192 | 28.5M | 7.1 | 75.6 | 90.5 | 82.1 | 71.8 | 82.8 | 80.8 |
| HRNet-W32 | 384×288 | 28.5M | 16.0 | 75.8 | 90.6 | 82.5 | 72.0 | 82.7 | 80.9 |
| HRNet-W32 + DARK | 384×288 | 28.5M | 16.0 | 76.6 | 90.7 | 82.8 | 72.7 | 83.9 | 81.5 |
| HRNet-W48 | 384×288 | 63.6M | 32.9 | 76.3 | 90.8 | 82.9 | 72.3 | 83.4 | 81.2 |
| HRNet-W48 + DARK | 384×288 | 63.6M | 32.9 | 76.8 | 90.6 | 83.2 | 72.8 | 84.0 | 81.7 |

Note:

Results on COCO test-dev2017, using a person detector with human AP of 60.9 on COCO test-dev2017

| Baseline | Input size | #Params | GFLOPs | AP | AP .5 | AP .75 | AP (M) | AP (L) | AR |
|---|---|---|---|---|---|---|---|---|---|
| HRNet-W48 | 384×288 | 63.6M | 32.9 | 75.5 | 92.5 | 83.3 | 71.9 | 81.5 | 80.5 |
| HRNet-W48 + DARK | 384×288 | 63.6M | 32.9 | 76.2 | 92.5 | 83.6 | 72.5 | 82.4 | 81.1 |
| HRNet-W48* | 384×288 | 63.6M | 32.9 | 77.0 | 92.7 | 84.5 | 73.4 | 83.1 | 82.0 |
| HRNet-W48 + DARK* | 384×288 | 63.6M | 32.9 | 77.4 | 92.6 | 84.6 | 73.6 | 83.7 | 82.3 |
| HRNet-W48 + DARK*- | 384×288 | 63.6M | 32.9 | 78.2 | 93.5 | 85.5 | 74.4 | 84.2 | 83.5 |
| HRNet-W48 + DARK*-+ | 384×288 | 63.6M | 32.9 | 78.9 | 93.8 | 86.0 | 75.1 | 84.4 | 83.5 |

Note:

Results on MPII val

| PCKh | Baseline | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Mean |
|---|---|---|---|---|---|---|---|---|---|
| 0.5 | HRNet_w32 | 97.1 | 95.9 | 90.3 | 86.5 | 89.1 | 87.1 | 83.3 | 90.3 |
| 0.5 | HRNet_w32 + DARK | 97.2 | 95.9 | 91.2 | 86.7 | 89.7 | 86.7 | 84.0 | 90.6 |
| 0.1 | HRNet_w32 | 51.1 | 42.7 | 42.0 | 41.6 | 17.9 | 29.9 | 31.0 | 37.7 |
| 0.1 | HRNet_w32 + DARK | 55.2 | 47.8 | 47.4 | 45.2 | 20.1 | 33.4 | 35.4 | 42.0 |

Note:
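For reference, PCKh counts a predicted joint as correct when its distance to the ground truth is within a fraction (the 0.5 or 0.1 threshold above) of the head segment length, which is why the PCKh@0.1 rows are a much stricter test of localization precision. The helper below is an illustrative sketch of that definition, not the repo's evaluation code.

```python
# Illustrative PCKh computation under the definition above.
import numpy as np

def pckh(pred, gt, head_sizes, visible, threshold=0.5):
    """pred, gt: (N, K, 2) arrays; head_sizes: (N,); visible: (N, K) bool."""
    dist = np.linalg.norm(pred - gt, axis=-1)     # (N, K) pixel distances
    norm = dist / head_sizes[:, None]             # in head-segment units
    correct = (norm <= threshold) & visible
    return correct.sum() / max(visible.sum(), 1)  # mean over visible joints
```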

Quick start

1. Preparation

1.1 Prepare the dataset

For the MPII dataset, the original annotation files are in MATLAB format. We have converted them into JSON format; you need to download the converted annotations from OneDrive or GoogleDrive. Extract them under ${POSE_ROOT}/data, so that your directory tree looks like this:

${POSE_ROOT}/data/mpii
├── annot
│   ├── gt_valid.mat
│   ├── test.json
│   ├── train.json
│   ├── trainval.json
│   └── valid.json
├── images
│   ├── 000001163.jpg
│   └── 000003072.jpg
└── mpii_human_pose_v1_u12_1.mat
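Each entry in the converted JSON files describes one annotated person. Here is a small sketch of inspecting them; the field names ("image", "center", "scale", "joints") follow the SimpleBaseline/HRNet-style MPII annotation format and should be verified against the downloaded files.

```python
# Sketch: peek at one converted MPII annotation entry.
import json

with open("data/mpii/annot/valid.json") as f:
    annots = json.load(f)

sample = annots[0]
print(sample["image"])        # image filename under data/mpii/images
print(sample["center"])       # person center (x, y) in the original image
print(sample["scale"])        # person scale w.r.t. a 200px reference height
print(len(sample["joints"]))  # 16 MPII joints, each an (x, y) pair
```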

For the COCO dataset, your directory tree should look like this:

${POSE_ROOT}/data/coco
├── annotations
├── images
│   ├── test2017
│   ├── train2017
│   └── val2017
└── person_detection_results
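You can sanity-check the COCO layout with pycocotools (`pip install pycocotools`). The annotation filename below is the standard COCO keypoints file; adjust the path to your ${POSE_ROOT}.

```python
# Sketch: verify the COCO keypoint annotations are readable.
from pycocotools.coco import COCO

coco = COCO("data/coco/annotations/person_keypoints_val2017.json")
img_ids = coco.getImgIds(catIds=coco.getCatIds(catNms=["person"]))
print(f"{len(img_ids)} val2017 images contain people")
ann_ids = coco.getAnnIds(imgIds=img_ids[0], iscrowd=False)
print(coco.loadAnns(ann_ids)[0]["keypoints"][:9])  # first 3 (x, y, v) triples
```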

1.2 Download the pretrained models

Pretrained models are provided; download them into your models directory (MODELS_ROOT, configured in the next step).

1.3 Prepare the environment

Set the parameters in the file prepare_env.sh as follows:

# DATASET_ROOT=$HOME/datasets
# COCO_ROOT=${DATASET_ROOT}/MSCOCO
# MPII_ROOT=${DATASET_ROOT}/MPII
# MODELS_ROOT=${DATASET_ROOT}/models

Then execute:

bash prepare_env.sh

Alternatively, you can prepare the environment step by step.

Citation

If you use our code or models in your research, please cite:

@InProceedings{Zhang_2020_CVPR,
    author = {Zhang, Feng and Zhu, Xiatian and Dai, Hanbin and Ye, Mao and Zhu, Ce},
    title = {Distribution-Aware Coordinate Representation for Human Pose Estimation},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2020}
}

Acknowledgement

Thanks to the authors of HRNet for open-sourcing their code.