Human Pose Estimation with Parsing Induced Learner

This repository contains the code and pretrained models of

Human Pose Estimation with Parsing Induced Learner [PDF]
Xuecheng Nie, Jiashi Feng, Yiming Zuo, and Shuicheng Yan
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018

Prerequisites

PyTorch and the Look into Person (LIP) dataset; both are covered by the installation steps below.

Installation

  1. Install PyTorch: Please follow the official instructions to install PyTorch.
  2. Clone the repository
    git clone --recursive https://github.com/NieXC/pytorch-pil.git
    
  3. Download the Look into Person (LIP) dataset and create symbolic links to the following directories (the expected layout is sketched after this list)
    ln -s PATH_TO_LIP_TRAIN_IMAGES_DIR dataset/lip/train_images   
    ln -s PATH_TO_LIP_VAL_IMAGES_DIR dataset/lip/val_images      
    ln -s PATH_TO_LIP_TEST_IMAGES_DIR dataset/lip/testing_images   
    ln -s PATH_TO_LIP_TRAIN_SEGMENTATION_ANNO_DIR dataset/lip/train_segmentations   
    ln -s PATH_TO_LIP_VAL_SEGMENTATION_ANNO_DIR dataset/lip/val_segmentations   
    
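After the symbolic links are in place, the dataset/lip directory should look roughly as follows (a sketch based only on the paths used in this README; the jsons directory holds the annotation files referenced by the testing commands below):

    dataset/lip/
    ├── train_images/
    ├── val_images/
    ├── testing_images/
    ├── train_segmentations/
    ├── val_segmentations/
    └── jsons/
        └── LIP_SP_TEST_annotations.json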

Usage

Training

Run the following command to train the model from scratch (default: an 8-stack Hourglass network as the pose network and a 1-stack Hourglass network as the Parsing Induced Learner):

sh run_train.sh

or

CUDA_VISIBLE_DEVICES=0,1 python main.py -b 24 --lr 0.0015

A simple way to record the training log is to append the following to the command above:

2>&1 | tee exps/logs/pil_lip.log
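
For example, the full training command with logging is simply the training command above combined with the tee redirection:

CUDA_VISIBLE_DEVICES=0,1 python main.py -b 24 --lr 0.0015 2>&1 | tee exps/logs/pil_lip.log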

Some configurable parameters for the training phase (those appearing in the command above):

  -b     training batch size (24 in the example above)
  --lr   initial learning rate (0.0015 in the example above)
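
For instance, to train on a single GPU with a smaller batch size (illustrative values, using only the flags shown above):

CUDA_VISIBLE_DEVICES=0 python main.py -b 12 --lr 0.0015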

Testing

Run the following command to evaluate the model on LIP validation set:

sh run_test.sh

or

CUDA_VISIBLE_DEVICES=0 python main.py --evaluate True --calc-pck True --resume exps/snapshots/pil_lip_best.pth.tar

Run the following command to evaluate the model on LIP testing set:

CUDA_VISIBLE_DEVICES=0 python main.py --evaluate True --resume exps/snapshots/pil_lip_best.pth.tar --eval-data dataset/lip/testing_images --eval-anno dataset/lip/jsons/LIP_SP_TEST_annotations.json

In particular, results will be saved as a .csv file following the official evaluation format of the LIP dataset for single-person human pose estimation. An example is provided in exps/preds/csv_results/pred_keypoints_lip.csv.
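
To take a quick look at the generated predictions, you can inspect the CSV directly (a simple check with standard shell tools; the path matches the example file above):

head -n 5 exps/preds/csv_results/pred_keypoints_lip.csv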

Some configurable parameters for the testing phase (those appearing in the commands above):

  --evaluate    run in evaluation mode (set to True)
  --calc-pck    compute PCK accuracy (used when evaluating on the validation set)
  --resume      path to the model checkpoint to evaluate
  --eval-data   directory of images to evaluate
  --eval-anno   path to the corresponding annotation .json file

Citation

If you use our code in your work or find it helpful, please cite the paper:

@inproceedings{nie2018pil,
  title={Human Pose Estimation with Parsing Induced Learner},
  author={Nie, Xuecheng and Feng, Jiashi and Zuo, Yiming and Yan, Shuicheng},
  booktitle={CVPR},
  year={2018}
}