
<div align="center"> <img src="docs/logo.jpg" width="400"> </div>

AlphaPose

AlphaPose is an accurate multi-person pose estimator; it is the first open-source system to achieve 70+ mAP (75 mAP) on the COCO dataset and 80+ mAP (82.1 mAP) on the MPII dataset. To match poses that correspond to the same person across frames, we also provide an efficient online pose tracker called Pose Flow. It is the first open-source online pose tracker to achieve both 60+ mAP (66.5 mAP) and 50+ MOTA (58.3 MOTA) on the PoseTrack Challenge dataset.

AlphaPose supports both Linux and Windows!

<div align="center"> <img src="docs/alphapose_17.gif" width="400" alt><br> COCO 17 keypoints </div> <div align="center"> <img src="docs/alphapose_26.gif" width="400" alt><br> <b><a href="https://github.com/Fang-Haoshu/Halpe-FullBody">Halpe 26 keypoints</a></b> + tracking </div> <div align="center"> <img src="docs/alphapose_136.gif" width="400" alt><br> <b><a href="https://github.com/Fang-Haoshu/Halpe-FullBody">Halpe 136 keypoints</a></b> + tracking, <b><a href="https://youtu.be/uze6chg-YeU">YouTube link</a></b><br> </div> <div align="center"> <img src="docs/alphapose_hybrik_smpl.gif" width="400" alt><br> <b><a href="https://github.com/Jeff-sjtu/HybrIK">SMPL</a></b> + tracking </div>

Results

Pose Estimation

Results on COCO test-dev 2015:

| Method | AP @0.5:0.95 | AP @0.5 | AP @0.75 | AP medium | AP large |
|:------------------------|:----:|:----:|:----:|:----:|:----:|
| OpenPose (CMU-Pose)     | 61.8 | 84.9 | 67.5 | 57.1 | 68.2 |
| Detectron (Mask R-CNN)  | 67.0 | 88.0 | 73.1 | 62.2 | 75.6 |
| AlphaPose               | 73.3 | 89.2 | 79.1 | 69.0 | 78.6 |

Results on MPII full test set:

| Method | Head | Shoulder | Elbow | Wrist | Hip | Knee | Ankle | Avg |
|:---------------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| OpenPose (CMU-Pose)  | 91.2 | 87.6 | 77.7 | 66.8 | 75.4 | 68.9 | 61.7 | 75.6 |
| Newell & Deng        | 92.1 | 89.3 | 78.9 | 69.8 | 76.2 | 71.6 | 64.7 | 77.5 |
| AlphaPose            | 91.3 | 90.5 | 84.0 | 76.4 | 80.3 | 79.9 | 72.4 | 82.1 |

More results and models are available in docs/MODEL_ZOO.md.

Pose Tracking

<p align='center'> <img src="docs/posetrack.gif" width="360"> <img src="docs/posetrack2.gif" width="344"> </p>

Please read trackers/README.md for details.

CrowdPose

<p align='center'> <img src="docs/crowdpose.gif" width="360"> </p>

Please read docs/CrowdPose.md for details.

Installation

Please check out docs/INSTALL.md.

Model Zoo

Please check out docs/MODEL_ZOO.md.

Quick Start

Inference:

```bash
./scripts/inference.sh ${CONFIG} ${CHECKPOINT} ${VIDEO_NAME} # ${OUTPUT_DIR}, optional
```

Inference with SMPL: download the SMPL model `basicModel_neutral_lbs_10_207_0_v1.0.0.pkl` from here and put it in `model_files/`, then run:

```bash
./scripts/inference_3d.sh ./configs/smpl/256x192_adam_lr1e-3-res34_smpl_24_3d_base_2x_mix.yaml ${CHECKPOINT} ${VIDEO_NAME} # ${OUTPUT_DIR}, optional
```
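For example, with the SMPL config above, a downloaded checkpoint, and a local video (the checkpoint and video filenames below are illustrative, not files shipped with the repo):

```bash
./scripts/inference_3d.sh ./configs/smpl/256x192_adam_lr1e-3-res34_smpl_24_3d_base_2x_mix.yaml \
    pretrained_models/smpl_res34.pth my_video.mp4 output_3d/
```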

For the high-level API, please refer to ./scripts/demo_api.py. To enable tracking, please refer to this page.
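As a minimal tracking sketch, assuming the `--pose_track` flag described in trackers/README.md (the checkpoint and video filenames here are illustrative):

```bash
# Estimate poses on a video and link them across frames with the built-in tracker.
python scripts/demo_inference.py \
    --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml \
    --checkpoint pretrained_models/fast_res50_256x192.pth \
    --video my_video.mp4 --outdir examples/res --pose_track
```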

Training and validation:

```bash
./scripts/train.sh ${CONFIG} ${EXP_ID}
./scripts/validate.sh ${CONFIG} ${CHECKPOINT}
```

Examples:

Demo using the FastPose model:

```bash
./scripts/inference.sh configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml pretrained_models/fast_res50_256x192.pth ${VIDEO_NAME}
# or
python scripts/demo_inference.py --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/
# or, if you want to use yolox-x as the detector
python scripts/demo_inference.py --detector yolox-x --cfg configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml --checkpoint pretrained_models/fast_res50_256x192.pth --indir examples/demo/
```
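Each run writes its detections to alphapose-results.json in the output directory (examples/res/ by default). A minimal sketch of loading that file, assuming the default COCO 17-keypoint format where `keypoints` is a flat `[x, y, score, ...]` list per detected person:

```python
import json

import numpy as np

# Results file written by demo_inference.py (default output directory assumed).
with open("examples/res/alphapose-results.json") as f:
    results = json.load(f)

for det in results:
    # One entry per detected person per image.
    kpts = np.asarray(det["keypoints"]).reshape(-1, 3)  # (num_joints, 3): x, y, confidence
    print(det["image_id"], "person score:", det["score"])
    print("  nose at:", kpts[0, :2])  # joint 0 is the nose in COCO keypoint ordering
```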

Train FastPose on the MSCOCO dataset:

```bash
./scripts/train.sh ./configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml exp_fastpose
```
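To evaluate a checkpoint against the validation set with the same config (here using the pretrained FastPose weights as an example):

```bash
./scripts/validate.sh ./configs/coco/resnet/256x192_res50_lr1e-3_1x.yaml pretrained_models/fast_res50_256x192.pth
```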

For more detailed inference options and examples, please refer to docs/GETTING_STARTED.md.

Common Issues & FAQ

Check out docs/faq.md for frequently asked questions. If that does not solve your problem, or if you find any bugs, don't hesitate to comment on GitHub or make a pull request!

Contributors

AlphaPose is based on RMPE (ICCV'17), authored by Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai and Cewu Lu; Cewu Lu is the corresponding author. Currently, it is maintained by Jiefeng Li*, Hao-Shu Fang*, Haoyi Zhu, Yuliang Xiu and Chao Xu.

The main contributors are listed in doc/contributors.md.

TODO

We would really appreciate it if you could offer any help and become a contributor to AlphaPose.

Citation

Please cite these papers in your publications if they help your research:

```
@article{alphapose,
  author = {Fang, Hao-Shu and Li, Jiefeng and Tang, Hongyang and Xu, Chao and Zhu, Haoyi and Xiu, Yuliang and Li, Yong-Lu and Lu, Cewu},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  title = {{AlphaPose}: Whole-Body Regional Multi-Person Pose Estimation and Tracking in Real-Time},
  year = {2022}
}
```

```
@inproceedings{fang2017rmpe,
  title = {{RMPE}: Regional Multi-Person Pose Estimation},
  author = {Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
  booktitle = {ICCV},
  year = {2017}
}
```

```
@inproceedings{li2019crowdpose,
  title = {{CrowdPose}: Efficient Crowded Scenes Pose Estimation and A New Benchmark},
  author = {Li, Jiefeng and Wang, Can and Zhu, Hao and Mao, Yihuan and Fang, Hao-Shu and Lu, Cewu},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages = {10863--10872},
  year = {2019}
}
```

If you used the 3D mesh reconstruction module, please also cite:

```
@inproceedings{li2021hybrik,
  title = {{HybrIK}: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation},
  author = {Li, Jiefeng and Xu, Chao and Chen, Zhicun and Bian, Siyuan and Yang, Lixin and Lu, Cewu},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages = {3383--3393},
  year = {2021}
}
```

If you used the PoseFlow tracking module, please also cite:

```
@inproceedings{xiu2018poseflow,
  author = {Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
  title = {{Pose Flow}: Efficient Online Pose Tracking},
  booktitle = {BMVC},
  year = {2018}
}
```

License

AlphaPose is freely available for non-commercial use, and may be redistributed under these conditions. For commercial queries, please drop an e-mail to mvig.alphapose[at]gmail[dot]com and cc lucewu[at]sjtu[dot]edu[dot]cn. We will send the detailed agreement to you.