# Pose2Seg
Official code for the paper "Pose2Seg: Detection Free Human Instance Segmentation" [ProjectPage][arXiv] @ CVPR 2019.

The OCHuman dataset proposed in our paper is released here.
<div align="center"> <img src="figures/pipeline.jpg" width="1000px"/> <p> Pipeline of our pose-based instance segmentation framework.</p> </div>Setup environment
```bash
# Python dependencies
pip install cython matplotlib tqdm opencv-python scipy pyyaml numpy
pip install torchvision torch

# Build and install the COCO API (pycocotools) from a local clone
cd ~/github-public/cocoapi/PythonAPI/
python setup.py build_ext install
cd -
```
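A quick, optional sanity check that the environment is usable (this snippet is not part of the repo; it just imports the dependencies installed above):

```python
# Sanity check: every dependency installed above should import cleanly.
import cv2, matplotlib, numpy, scipy, tqdm, yaml
import torch, torchvision
from pycocotools.coco import COCO  # built from the cocoapi clone above

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("opencv", cv2.__version__)
```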
## Download data
- COCO 2017
- OCHuman
**Note:** `person_keypoints_(train/val)2017_pose2seg.json` is a subset of `person_keypoints_(train/val)2017.json` (from the COCO 2017 Train/Val annotations): for our experiments we keep only the instances that have both keypoint and segmentation annotations. A sketch of this selection rule follows.
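For reference, here is a minimal sketch of how such a subset could be rebuilt with pycocotools. This is a hypothetical helper, not the repo's own tooling; the shipped `*_pose2seg.json` files make it unnecessary in practice:

```python
import json
from pycocotools.coco import COCO

def make_pose2seg_subset(src_json, dst_json):
    """Keep only 'person' instances that have both keypoint and
    segmentation annotations, mirroring the selection rule above."""
    coco = COCO(src_json)
    person_id = coco.getCatIds(catNms=['person'])[0]
    anns = coco.loadAnns(coco.getAnnIds(catIds=[person_id], iscrowd=False))
    kept = [a for a in anns
            if a.get('num_keypoints', 0) > 0 and a.get('segmentation')]
    img_ids = {a['image_id'] for a in kept}
    subset = {
        'images': [im for im in coco.dataset['images'] if im['id'] in img_ids],
        'annotations': kept,
        'categories': coco.loadCats([person_id]),
    }
    with open(dst_json, 'w') as f:
        json.dump(subset, f)

make_pose2seg_subset('person_keypoints_val2017.json',
                     'person_keypoints_val2017_pose2seg.json')
```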
## Setup data

The `data` folder should be organized as follows:
```
data
├── coco2017
│   ├── annotations
│   │   ├── person_keypoints_train2017_pose2seg.json
│   │   ├── person_keypoints_val2017_pose2seg.json
│   ├── train2017
│   │   ├── ####.jpg
│   ├── val2017
│   │   ├── ####.jpg
├── OCHuman
│   ├── annotations
│   │   ├── ochuman_coco_format_test_range_0.00_1.00.json
│   │   ├── ochuman_coco_format_val_range_0.00_1.00.json
│   ├── images
│   │   ├── ####.jpg
```
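Before training, a small check that the layout matches the paths above can save a failed run (a convenience sketch, not part of the repo):

```python
import os

# The annotation files and image folders implied by the layout above.
expected = [
    'data/coco2017/annotations/person_keypoints_train2017_pose2seg.json',
    'data/coco2017/annotations/person_keypoints_val2017_pose2seg.json',
    'data/coco2017/train2017',
    'data/coco2017/val2017',
    'data/OCHuman/annotations/ochuman_coco_format_test_range_0.00_1.00.json',
    'data/OCHuman/annotations/ochuman_coco_format_val_range_0.00_1.00.json',
    'data/OCHuman/images',
]
for path in expected:
    print('OK      ' if os.path.exists(path) else 'MISSING ', path)
```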
## How to train

```bash
python train.py
```

**Note:** currently only single-GPU training is supported.
## How to test

This lets you test the model on (1) the COCOPersons val set and (2) the OCHuman val & test sets:

```bash
python test.py --weights last.pkl --coco --OCHuman
```
We retrained our model with this repo and obtained results similar to those reported in the paper. The final weights can be downloaded here.
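If you want to inspect the checkpoint before wiring it into `test.py`, a minimal sketch, assuming `last.pkl` was written with `torch.save` (the `--weights` flag suggests this, but the exact format is defined by this repo's training code):

```python
import torch

# Load the checkpoint on CPU and peek at its structure.
# Assumption: last.pkl is a torch.save artifact (e.g. a state dict).
state = torch.load('last.pkl', map_location='cpu')
print(type(state))
if isinstance(state, dict):
    for key in list(state)[:10]:  # first few parameter names
        print(key)
```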
## About Human Pose Templates in COCO
<div align="center">
<img src="figures/pose_templates.png" width="500px"/>
<p>Pose templates clustered using K-means on COCO.</p>
</div>

This repo already contains the template file `modeling/templates.json` used in our paper, but you are free to explore different cluster parameters, as discussed in the paper. See `visualize_cluster.ipynb` for an example.
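If you want to re-derive templates with different parameters, the core idea is plain K-means over aligned keypoint vectors. A minimal sketch with scikit-learn, where `poses` and the input file are hypothetical (the notebook above shows the repo's actual procedure):

```python
import numpy as np
from sklearn.cluster import KMeans

# poses: (N, 17, 2) array of COCO keypoints, already normalized/aligned
# (e.g. centered and scaled to unit size) -- assumed pre-computed input.
poses = np.load('aligned_coco_poses.npy')
X = poses.reshape(len(poses), -1)        # flatten each pose to a 34-dim vector

# Cluster into k pose templates; k is a free parameter worth exploring.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
templates = kmeans.cluster_centers_.reshape(-1, 17, 2)
print('templates:', templates.shape)      # (3, 17, 2)
```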