Self-Correction for Human Parsing

Python 3.6 | License: MIT

An out-of-the-box human parsing representation extractor.

Our solution ranks 1st in all human parsing tracks (single, multiple, and video) of the third LIP challenge!

(Figure: LIP human parsing visualization)

Requirements

conda env create -f environment.yaml
conda activate schp
pip install -r requirements.txt
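
Once the environment is active, a quick sanity check such as the one below (a minimal sketch, assuming PyTorch was installed by requirements.txt; the script is not part of the repo) confirms that the interpreter sees a working, CUDA-enabled build:

# check_env.py - hypothetical helper, not part of this repository
import torch

# Print the installed PyTorch version and whether a CUDA device is visible
print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())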

Simple Out-of-the-Box Extractor

IntegratedPIFu uses this extractor to generate the pseudo ground-truth human parsing maps.

Pascal-Person-Part (exp-schp-201908270938-pascal-person-part.pth)

Model Preparation

Make a checkpoints directory: mkdir checkpoints

Then download the pretrained model (trained on the Pascal-Person-Part dataset) from https://drive.google.com/uc?id=1E5YwNKW2VOEayK9mWCS3Kpsxf-3z04ZE

Put the downloaded file (exp-schp-201908270938-pascal-person-part.pth) into the checkpoints folder.
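
Alternatively, the model preparation steps above can be scripted. The snippet below is a minimal sketch that fetches the checkpoint with the gdown package (an assumption: gdown is available, e.g. via pip install gdown) and then verifies that the downloaded file deserializes as a regular PyTorch checkpoint; it is not part of the repository.

# download_checkpoint.py - hypothetical helper, assumes gdown is installed (pip install gdown)
import gdown
import torch

url = "https://drive.google.com/uc?id=1E5YwNKW2VOEayK9mWCS3Kpsxf-3z04ZE"
output = "checkpoints/exp-schp-201908270938-pascal-person-part.pth"

# Download the Pascal-Person-Part checkpoint into the checkpoints folder
gdown.download(url, output, quiet=False)

# Sanity check: the file should load as a standard PyTorch checkpoint
checkpoint = torch.load(output, map_location="cpu")
print("Loaded checkpoint of type:", type(checkpoint))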

Running the extractor to generate human parsing maps for IntegratedPIFu

python simple_extractor.py --integratedpifu-dir [PATH_TO_IntegratedPIFu Folder]
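
After the script finishes, you can spot-check one of the generated maps. The sketch below assumes the parsing maps are written as single-channel indexed PNGs in which each pixel stores a part label; the file path is only a placeholder, so substitute one of the PNGs produced inside your IntegratedPIFu folder.

# inspect_parsing_map.py - hypothetical helper; the path below is a placeholder
import numpy as np
from PIL import Image

# Substitute one of the parsing maps produced by simple_extractor.py
parsing_map = np.array(Image.open("path/to/a_generated_parsing_map.png"))

# Each pixel should hold a small integer class index (background plus body parts)
print("shape:", parsing_map.shape)
print("labels present:", np.unique(parsing_map))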

Citation

Please cite our work if you find this repo useful in your research.

@article{li2020self,
  title={Self-Correction for Human Parsing},
  author={Li, Peike and Xu, Yunqiu and Wei, Yunchao and Yang, Yi},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2020},
  doi={10.1109/TPAMI.2020.3048039}
}

Related

Our code adopts InplaceSyncBN to reduce GPU memory cost.

There is also a PaddlePaddle Implementation of this project.