# RMPE: Regional Multi-person Pose Estimation
By Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai, Cewu Lu.
A new version, AlphaPose, based on RMPE has been released! It achieves 20 fps and is 10 mAP more accurate than this repo! Check out https://github.com/MVIG-SJTU/AlphaPose/tree/pytorch
## Introduction
RMPE is a two-step framework for the task of multi-person pose estimation. You can use the code to train/evaluate a model for the pose estimation task. For more details, please refer to our arXiv paper.
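To make the two-step design concrete, here is a minimal sketch of the pipeline flow in Python. The three functions are hypothetical stand-ins, not functions from this repo; they only illustrate the data flow: detect human proposals, estimate a single-person pose inside each region, then remove redundant poses.

```python
# Minimal sketch of the RMPE pipeline (illustrative only).
# `detect_humans`, `estimate_single_pose`, and `pose_nms` are hypothetical
# stand-ins, not functions provided by this repository.

def detect_humans(image):
    """Step 1: a human detector returns boxes (x1, y1, x2, y2, score)."""
    raise NotImplementedError  # e.g. an SSD-style detector

def estimate_single_pose(image, box):
    """Step 2: single-person pose estimation (SPPE) inside one region."""
    raise NotImplementedError  # returns a list of (x, y, score) joints

def pose_nms(poses):
    """Step 3: pose NMS removes redundant pose estimates."""
    raise NotImplementedError

def rmpe(image):
    boxes = detect_humans(image)
    poses = [estimate_single_pose(image, box) for box in boxes]
    return pose_nms(poses)
```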
<p align="center"> <img src="https://github.com/fang-haoshu/RMPE/blob/master/readme/new-framework.jpg" alt="RMPE Framework" width="600px"> </p>Results
<p align="left"> <img src="https://github.com/Fang-Haoshu/RMPE/blob/master/readme/demo.gif", width="720"> </p>Video results available here
Results on MPII dataset:
| Method | MPII full test mAP | s/frame |
|---|---|---|
| Iqbal & Gall, ECCVw'16 | 43.1 | 10 |
| DeeperCut, ECCV'16 | 59.5 | 485 |
| RMPE | 76.7 | 1.5 |
Results on COCO test-dev 2015:
| Method | AP @0.5:0.95 | AP @0.5 | AP @0.75 |
|---|---|---|---|
| RMPE | 61.8 | 83.7 | 69.8 |
## Contents

- [Installation](#installation)
- [Preparation](#preparation)
- [Demo](#demo)
- [Train/Eval](#traineval)
- [Citation](#citation)
- [Acknowledgements](#acknowledgements)
## Installation
- Get the code. We will call the directory that you cloned Caffe into `$CAFFE_ROOT`:

  ```shell
  git clone https://github.com/fang-haoshu/rmpe.git
  cd rmpe
  ```
- Build the code. Please follow the Caffe instructions to install all necessary packages and build it:

  ```shell
  # Modify Makefile.config according to your Caffe installation.
  cp Makefile.config.example Makefile.config
  make -j8
  # Make sure to include $CAFFE_ROOT/python in your PYTHONPATH.
  make py
  make test -j8
  make runtest -j8
  # If you have multiple GPUs installed in your machine, make runtest might fail.
  # If so, try the following:
  export CUDA_VISIBLE_DEVICES=0; make runtest -j8
  # If you get the error "Check failed: error == cudaSuccess (10 vs. 0) invalid device ordinal",
  # first make sure you have the specified GPU, or try the following if you have multiple GPUs:
  unset CUDA_VISIBLE_DEVICES
  ```
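After `make py` succeeds, a quick way to confirm that pycaffe is usable is the check below; this is a minimal sanity-check sketch, assuming `$CAFFE_ROOT/python` is already on your `PYTHONPATH`.

```python
# Minimal sanity check that pycaffe was built and is on PYTHONPATH.
import caffe

caffe.set_mode_cpu()  # switch to caffe.set_mode_gpu() if CUDA is set up
print("pycaffe imported successfully")
```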
## Preparation
### For demo only
- Download the pre-trained human detector (Google drive|Baidu cloud) and the SPPE+SSTN Caffe model (Google drive|Baidu cloud). By default, we assume the models are stored in `$CAFFE_ROOT/models/VGG_SSD/` and `$CAFFE_ROOT/models/SPPE/`, respectively.
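A small check like the one below can confirm the expected directories exist before running the demo. It is a sketch under assumptions: `CAFFE_ROOT` is read as an environment variable here, and only the directory names above are checked, since the exact `.caffemodel`/`.prototxt` filenames inside them are not specified in this README.

```python
# Verify the assumed model directories exist before running the demo.
# (Only the directory names come from the README; file names inside may vary.)
import os

caffe_root = os.environ.get("CAFFE_ROOT", ".")
for subdir in ("models/VGG_SSD", "models/SPPE"):
    path = os.path.join(caffe_root, subdir)
    status = "found" if os.path.isdir(path) else "MISSING"
    print(f"{path}: {status}")
```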
### For train/eval
This part of our model is implemented in Torch7. Please refer to this repo for more details.
## Demo
Our experiments use both Caffe and Torch7, but we implement the whole framework in Caffe so you can run the demo easily. Note: the current Caffe model of SPPE uses a 2-stacked hourglass network, which has lower precision. We would be grateful if anyone can help transfer the new Torch model to Caffe.
- Run the IPython notebook. It will show you how our whole framework works:

  ```shell
  cd $CAFFE_ROOT
  jupyter notebook examples/rmpe/Regional\ Multi-person\ Pose\ Estimation.ipynb
  ```
- Run the Python program for more results:

  ```shell
  python examples/rmpe/demo.py
  ```
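For orientation, the sketch below shows how the two models could be loaded and chained in pycaffe. The prototxt/caffemodel filenames and the output-layout comment are assumptions in the style of SSD-like Caffe models, not paths confirmed by this repo; `demo.py` is the authoritative reference.

```python
# Illustrative sketch of chaining the human detector and SPPE in pycaffe.
# File names below are hypothetical; see demo.py for the actual pipeline.
import caffe

caffe.set_mode_cpu()

# Step 1: human detector (SSD-style). Hypothetical file names.
detector = caffe.Net("models/VGG_SSD/deploy.prototxt",
                     "models/VGG_SSD/VGG_SSD.caffemodel", caffe.TEST)

# Step 2: single-person pose estimator (SPPE+SSTN). Hypothetical file names.
sppe = caffe.Net("models/SPPE/deploy.prototxt",
                 "models/SPPE/SPPE.caffemodel", caffe.TEST)

# An SSD-style detector typically emits detections of shape [1, 1, N, 7]:
# [image_id, label, confidence, xmin, ymin, xmax, ymax] (normalized coords).
# Each confident human box is then cropped and fed through `sppe`.
```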
## Train/Eval
- Train SPPE+SSTN. This part of our model is implemented in Torch7. Please refer to this repo for more details. We will call the directory that you cloned the repo into `$SPPE_ROOT`. I have also written an implementation in Caffe; you can email me for the script.

- Evaluate the model. You can modify line 45 in `demo.py` to evaluate our framework on the whole test set, but the results will be different. To reproduce the results reported in our paper:
  ```shell
  # First, get the results of the human detector
  cd $CAFFE_ROOT
  jupyter notebook examples/rmpe/human_detection.ipynb
  # Then move the results to $SPPE_ROOT/predict/annot/
  mv examples/rmpe/mpii-test0.09 $SPPE_ROOT/predict/annot/
  # Next, do single-person pose estimation
  cd $SPPE_ROOT/predict
  th main.lua predict-test
  # Finally, do pose NMS
  python batch_nms.py
  # Our results are stored in txt format. To evaluate, download the MPII toolkit
  # and put it in the current directory, then start MATLAB:
  matlab
  # In MATLAB:
  setpred()
  ```
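For intuition about the pose-NMS step that `batch_nms.py` performs, below is a simplified greedy sketch. It uses a plain Gaussian joint-distance similarity rather than the parametric pose distance from the paper, and all names and thresholds are illustrative.

```python
# Simplified greedy pose NMS (illustrative; batch_nms.py implements the
# paper's parametric pose NMS, which this sketch does not reproduce).
import numpy as np

def pose_similarity(p, q, sigma=25.0):
    """Soft similarity between two poses given as (K, 3) arrays of
    (x, y, score); sigma is in pixels and should match the image scale."""
    d2 = np.sum((p[:, :2] - q[:, :2]) ** 2, axis=1)
    return float(np.mean(np.exp(-d2 / (2 * sigma ** 2))))

def greedy_pose_nms(poses, threshold=0.5):
    """Keep the highest-scoring pose, drop poses too similar to it, repeat."""
    order = sorted(range(len(poses)),
                   key=lambda i: float(poses[i][:, 2].sum()), reverse=True)
    kept = []
    for i in order:
        if all(pose_similarity(poses[i], poses[j]) < threshold for j in kept):
            kept.append(i)
    return [poses[i] for i in kept]
```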
## Citation
Please cite the paper in your publications if it helps your research:
```bibtex
@inproceedings{fang2017rmpe,
  title={{RMPE}: Regional Multi-person Pose Estimation},
  author={Fang, Hao-Shu and Xie, Shuqin and Tai, Yu-Wing and Lu, Cewu},
  booktitle={ICCV},
  year={2017}
}
```
## Acknowledgements
Thanks to Wei Liu, Alejandro Newell, T. Pfister, Kaichun Mo, and Maxime Oquab for contributing their code. Thanks to the authors of Caffe and Torch7!