Mutual Learning to Adapt for Joint Human Parsing and Pose Estimation

This repository contains the code and pretrained models for

Mutual Learning to Adapt for Joint Human Parsing and Pose Estimation [PDF]
Xuecheng Nie, Jiashi Feng, and Shuicheng Yan
European Conference on Computer Vision (ECCV), 2018

Prerequisites

Installation

  1. Install PyTorch: follow the official PyTorch installation instructions.
  2. Clone the repository
    git clone --recursive https://github.com/NieXC/pytorch-mula.git
    
  3. Download the Look into Person (LIP) dataset and create symbolic links to its directories (a quick sanity check follows this list):
    ln -s PATH_TO_LIP_TRAIN_IMAGES_DIR dataset/lip/train_images   
    ln -s PATH_TO_LIP_VAL_IMAGES_DIR dataset/lip/val_images      
    ln -s PATH_TO_LIP_TEST_IMAGES_DIR dataset/lip/testing_images   
    ln -s PATH_TO_LIP_TRAIN_SEGMENTATION_ANNO_DIR dataset/lip/train_segmentations   
    ln -s PATH_TO_LIP_VAL_SEGMENTATION_ANNO_DIR dataset/lip/val_segmentations   
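
A minimal check that the links resolve correctly (assuming the dataset/lip layout above):

# each entry should be a symlink to the corresponding LIP directory
ls -l dataset/lip/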
    

Usage

Training

Run the following command to train the model from scratch (default: a 5-stage Hourglass-based network):

sh run_train.sh

or

CUDA_VISIBLE_DEVICES=0,1 python main.py -b 24 --lr 0.0015

A simple way to record the training log is to append the following to the command above:

2>&1 | tee exps/logs/mula_lip.log
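
For example, the full training command with logging (the log path follows the repository layout above):

CUDA_VISIBLE_DEVICES=0,1 python main.py -b 24 --lr 0.0015 2>&1 | tee exps/logs/mula_lip.log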

Some configurable parameters in the training phase, as used in the command above: -b sets the batch size and --lr the learning rate.
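
Assuming main.py parses its flags with argparse (a sketch, not confirmed by the repository docs), the full list of options can be printed with:

python main.py --help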

Testing

Run the following command to evaluate the model on LIP validation set:

sh run_test.sh

or

CUDA_VISIBLE_DEVICES=0 python main.py --evaluate True --calc-pck True --calc-miou True --resume exps/snapshots/mula_lip.pth.tar

Run the following command to evaluate the model on LIP testing set:

CUDA_VISIBLE_DEVICES=0 python main.py --evaluate True --resume exps/snapshots/mula_lip.pth.tar --eval-data dataset/lip/testing_images --eval-anno dataset/lip/jsons/LIP_SP_TEST_annotations.json

In particular, human pose estimation results will be saved as a .csv file following the official LIP evaluation format for single-person pose estimation; an example is provided in exps/preds/pose_results/pred_keypoints_lip.csv. Human parsing results will be saved as a set of .png images in the folder exps/preds/parsing_results, each a body-part label map for the corresponding testing image.
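
To spot-check the predictions after a run (paths follow the defaults above):

# preview the predicted keypoints
head -n 3 exps/preds/pose_results/pred_keypoints_lip.csv
# count the generated parsing label maps
ls exps/preds/parsing_results/*.png | wc -l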

Some configurable parameters in the testing phase, as used in the commands above: --evaluate switches to evaluation mode, --calc-pck and --calc-miou compute the PCK (pose) and mIoU (parsing) metrics, --resume selects the checkpoint, and --eval-data / --eval-anno point to the evaluation images and annotations.

The models generated with this code can be downloaded here: GoogleDrive. Training logs can be found in the same folder for reference.

Citation

If you use our code in your work or find it helpful, please cite the paper:

@inproceedings{nie2018mula,
  title={Mutual Learning to Adapt for Joint Human Parsing and Pose Estimation},
  author={Nie, Xuecheng and Feng, Jiashi and Yan, Shuicheng},
  booktitle={ECCV},
  year={2018}
}