<div align="center"> <img src="doc/logo.jpg" width="400"> </div>

AlphaPose

AlphaPose is an accurate multi-person pose estimator. It is the first real-time open-source system to achieve 70+ mAP (72.3 mAP) on the COCO dataset and 80+ mAP (82.1 mAP) on the MPII dataset. To match poses that correspond to the same person across frames, we also provide an efficient online pose tracker called Pose Flow. It is the first open-source online pose tracker to achieve both 60+ mAP (66.5 mAP) and 50+ MOTA (58.3 MOTA) on the PoseTrack Challenge dataset.

Results

Pose Estimation

<p align="center"> <img src="doc/pose.gif" width="360"> </p>

Results on COCO test-dev 2015:

| Method | AP @0.5:0.95 | AP @0.5 | AP @0.75 | AP medium | AP large |
|:-------|:------------:|:-------:|:--------:|:---------:|:--------:|
| OpenPose (CMU-Pose) | 61.8 | 84.9 | 67.5 | 57.1 | 68.2 |
| Detectron (Mask R-CNN) | 67.0 | 88.0 | 73.1 | 62.2 | 75.6 |
| AlphaPose | 72.3 | 89.2 | 79.1 | 69.0 | 78.6 |

Results on MPII full test set:

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Ave |
|:-------|:----:|:--------:|:-----:|:-----:|:---:|:----:|:-----:|:---:|
| OpenPose (CMU-Pose) | 91.2 | 87.6 | 77.7 | 66.8 | 75.4 | 68.9 | 61.7 | 75.6 |
| Newell & Deng | 92.1 | 89.3 | 78.9 | 69.8 | 76.2 | 71.6 | 64.7 | 77.5 |
| AlphaPose | 91.3 | 90.5 | 84.0 | 76.4 | 80.3 | 79.9 | 72.4 | 82.1 |

Pose Tracking

<p align='center'> <img src="doc/posetrack.gif" width="360"> <img src="doc/posetrack2.gif" width="344"> </p>

Results on PoseTrack Challenge validation set:

  1. Task 2: Multi-Person Pose Estimation (mAP)
| Method | Head mAP | Shoulder mAP | Elbow mAP | Wrist mAP | Hip mAP | Knee mAP | Ankle mAP | Total mAP |
|:-------|:--------:|:------------:|:---------:|:---------:|:-------:|:--------:|:---------:|:---------:|
| Detect-and-Track (FAIR) | 67.5 | 70.2 | 62.0 | 51.7 | 60.7 | 58.7 | 49.8 | 60.6 |
| AlphaPose | 66.7 | 73.3 | 68.3 | 61.1 | 67.5 | 67.0 | 61.3 | 66.5 |
  2. Task 3: Pose Tracking (MOTA)
| Method | Head MOTA | Shoulder MOTA | Elbow MOTA | Wrist MOTA | Hip MOTA | Knee MOTA | Ankle MOTA | Total MOTA | Total MOTP | Speed (FPS) |
|:-------|:---------:|:-------------:|:----------:|:----------:|:--------:|:---------:|:----------:|:----------:|:----------:|:-----------:|
| Detect-and-Track (FAIR) | 61.7 | 65.5 | 57.3 | 45.7 | 54.3 | 53.1 | 45.7 | 55.2 | 61.5 | Unknown |
| PoseFlow (DeepMatch) | 59.8 | 67.0 | 59.8 | 51.6 | 60.0 | 58.4 | 50.5 | 58.3 | 67.8 | 8 |
| PoseFlow (OrbMatch) | 59.0 | 66.8 | 60.0 | 51.8 | 59.4 | 58.4 | 50.3 | 58.0 | 62.2 | 24 |

Note: Please read PoseFlow/README.md for details.

CrowdPose

<p align='center'> <img src="doc/crowdpose.gif" width="360"> </p>

Results on CrowdPose Validation:

Compare with state-of-the-art methods

| Method | AP @0.5:0.95 | AP @0.5 | AP @0.75 | AR @0.5:0.95 | AR @0.5 | AR @0.75 |
|:-------|:------------:|:-------:|:--------:|:------------:|:-------:|:--------:|
| Detectron (Mask R-CNN) | 57.2 | 83.5 | 60.3 | 65.9 | 89.3 | 69.4 |
| Simple Pose (Xiao et al.) | 60.8 | 81.4 | 65.7 | 67.3 | 86.3 | 71.8 |
| Ours | 66.0 | 84.2 | 71.5 | 72.7 | 89.5 | 77.5 |

Compare with open-source systems

| Method | AP @Easy | AP @Medium | AP @Hard | FPS |
|:-------|:--------:|:----------:|:--------:|:---:|
| OpenPose (CMU-Pose) | 62.7 | 48.7 | 32.3 | 5.3 |
| Detectron (Mask R-CNN) | 69.4 | 57.9 | 45.8 | 2.9 |
| Ours (PyTorch branch) | 75.5 | 66.3 | 57.4 | 10.1 |

Note: Please read doc/CrowdPose.md for details.

Installation

Note: For new users, or users who are not familiar with TensorFlow or Torch, we suggest the PyTorch version, since it is more user-friendly and runs faster.

  1. Get the code and build the related modules:

     ```shell
     git clone https://github.com/MVIG-SJTU/AlphaPose.git
     cd AlphaPose/human-detection/lib/
     make clean
     make
     cd newnms/
     make
     cd ../../../
     ```

  2. Install Torch and TensorFlow (version >= 1.2). After that, install the related dependencies:

     ```shell
     chmod +x install.sh
     ./install.sh
     ```

  3. Run fetch_models.sh to download our pre-trained models, or download the models manually: output.zip (Google drive | Baidu pan), final_model.t7 (Google drive | Baidu pan).

     ```shell
     chmod +x fetch_models.sh
     ./fetch_models.sh
     ```

Quick Start

```shell
./run.sh --indir examples/demo/ --outdir examples/results/ --vis
```

The visualized results will be stored in examples/results/RENDER. To easily process images/video and display/save the results, please see doc/run.md. If you run into any problems, check doc/faq.md.

Output

The output format (keypoint index ordering, etc.) is documented in doc/output.md.

Speeding Up AlphaPose

We provide a fast mode for human detection that disables multi-scale testing. You can turn it on by adding --mode fast.
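For example, combining fast mode with the Quick Start command above (same demo paths; this is only a sketch) would look like:

```shell
# Fast mode: single-scale human detection, trading some accuracy for speed.
./run.sh --indir examples/demo/ --outdir examples/results/ --mode fast
```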

If you have multiple GPUs, or GPUs with large memory, you can speed up the pose estimation step with multi-GPU testing or large-batch testing:

```shell
./run.sh --indir examples/demo/ --outdir examples/results/ --gpu 0,1,2,3 --batch 5
```

This assumes that you have four GPUs in your machine and that each card can run a batch of 5 images. Recommended batch sizes for GPUs with different memory sizes:

| GPU memory | Batch size |
|:----------:|:----------:|
| 4 GB | 3 |
| 8 GB | 6 |
| 12 GB | 9 |
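Following this table, a machine with a single 8 GB GPU would use a batch size of 6. The flags below mirror the Quick Start paths and are only a sketch:

```shell
# Single GPU (ID 0) with 8 GB memory: batch size 6, per the table above.
./run.sh --indir examples/demo/ --outdir examples/results/ --gpu 0 --batch 6
```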

See doc/run.md for more details.

Feedbacks

If you run into any problems, check doc/faq.md first. If that does not solve your problem, or if you find any bugs, don't hesitate to comment on GitHub or make a pull request!

Contributors

AlphaPose is based on RMPE (ICCV'17), authored by Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai and Cewu Lu; Cewu Lu is the corresponding author. It is currently developed and maintained by Hao-Shu Fang, Jiefeng Li, Yuliang Xiu and Ruiheng Chang.

The main contributors are listed in doc/contributors.md.

Citation

Please cite these papers in your publications if they help your research:

```
@inproceedings{fang2017rmpe,
  title={{RMPE}: Regional Multi-person Pose Estimation},
  author={Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
  booktitle={ICCV},
  year={2017}
}

@inproceedings{xiu2018poseflow,
  title={{Pose Flow}: Efficient Online Pose Tracking},
  author={Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
  booktitle={BMVC},
  year={2018}
}
```

License

AlphaPose is freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please send an e-mail to mvig.alphapose[at]gmail[dot]com and cc lucewu[at]sjtu[dot]edu[dot]cn; we will send you the detailed agreement.