Inner Space Preserving - Generative Pose Machine (ISP-GPM)
This is the code for the following paper:
S. Liu, S. Ostadabbas, “Inner Space Preserving Generative Pose Machine,” accepted for publication in the European Conference on Computer Vision (ECCV’18), September 8-14, 2018, Munich, Germany.
Check the project page for more materials.
Contact: Shuangjun Liu,
Contents
- 1. Requirements
- 2. Download SURREAL dataset and Index files
- 3. Path settings
- 4. Training
- 5. Testing
- 6. Evaluation
- Citation
- License
- Acknowledgements
1. Requirements
- Install Torch with cuDNN support.
- Install matio by
luarocks install matio
- Install OpenCV-Torch by
luarocks install cv
- Install display server by
luarocks install https://raw.githubusercontent.com/szym/display/master/display-scm-0.rockspec
- Download SURREAL
- Download valid Index
Tested on Ubuntu 16.04 with CUDA v8 and cuDNN v5.1.
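To verify that these Lua dependencies are visible to Torch, you can run a quick check such as the sketch below with th (the file name and messages are only suggestions):

-- checkDeps.lua: confirm the required rocks load; run with: th checkDeps.lua
local deps = {'matio', 'cv', 'display'}
for _, name in ipairs(deps) do
  local ok = pcall(require, name)
  print(name, ok and 'OK' or 'MISSING -- see the luarocks commands above')
end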
2. Download SURREAL dataset and Index files
Each sequence in SURREAL contains four major data files:
<sequenceName>_c%04d.mp4
<sequenceName>_c%04d_depth.mat
<sequenceName>_c%04d_segm.mat
<sequenceName>_c%04d_info.mat
For our training purpose, you only need to download the <sequenceName>_c%04d.mp4 and <sequenceName>_c%04d_info.mat files.
We noticed that some sequences in the original SURREAL only partially contain the human body, or do not contain it at all, which is not suitable for our reposing purpose. Our logic is simple: if we cannot observe the human body, we can by no means repose it. So our strategy is direct: keep only the frames in which the human is fully observed. Please download the valid frame indices generated by us and copy them (the test, train, and val folders) directly into the original SURREAL folder structure under cmu/.
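For reference, the annotation files can be inspected with matio. A minimal sketch (the joints2D field name follows the SURREAL documentation, and <sequenceName> is a placeholder for a sequence you actually downloaded):

-- inspect one SURREAL annotation file; run with th
local matio = require 'matio'
local info = matio.load('cmu/train/run0/<sequenceName>/<sequenceName>_c0001_info.mat')
print(info.joints2D:size()) -- 2D joint annotations, one slice per frame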
3. Path settings
Dataset path
Set the dataset root.
Set the datasetname.
In our case, the dataset is located at <datasetRoot>/cmu.
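Assuming the original SURREAL layout with the index folders from Section 2 copied in, the structure under the dataset root should roughly look like:

<datasetRoot>/cmu/train/...
<datasetRoot>/cmu/val/...
<datasetRoot>/cmu/test/...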
Experiment path
Set your experiment root by -logRoot.
Set your experiment name by -dirName.
All models and generated results will be saved there.
Sample path
We provide a few samples from different domains, including real humans, paintings, and sculptures, located in the samples folder.
You can provide your own source location by setting the -genFd option.
4. Training
We provide an example of training a model with the cGAN configuration by
th main.lua -dirName <your_experiment_id> -cGAN
For customized settings, please edit opts.lua accordingly or pass options on the command line.
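If you use the display package installed above, training can typically be monitored by starting its server first, e.g.

th -ldisplay.start 8000 0.0.0.0

and then opening http://localhost:8000 in a browser (the port is only an example; see the display package documentation).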
5. Testing
You can repose the provided samples by
th main.lua -epochNumber <yourEpochNum + 1> -flgGenFd -dirName <your_model_nm>
You can download our pretrained model GPM_MP_D2, which uses a 2-layer discriminator and was trained for 50 epochs. In this case:
th main.lua -epochNumber 51 -flgGenFd -dirName GPM_MP_D2
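To repose images from your own folder instead of the provided samples, point -genFd (see Path settings) at that folder, for example:

th main.lua -epochNumber 51 -flgGenFd -dirName GPM_MP_D2 -genFd <your_image_folder>

This combination of flags is only an illustration; adjust the epoch number and model name to your own experiment.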
6. Evaluation
We provide the inner space preserving evaluation on the first 100 images of the SURREAL validation set:
th main.lua -dirName GPM7 -epochNumber 51 -ifTsRMS
For human pose estimation, please download the stacked hourglass network. Then evaluate the pose estimation results on the reposed humans against the ground-truth poses employed during reposing.
Citation
If you find our work useful in your research, please consider citing our paper:
@INPROCEEDINGS{sjliu2018ISPGPM,
title = {Inner Space Preserving Generative Pose Machine},
author = {Liu, Shuangjun and Ostadabbas, Sarah},
booktitle = {ECCV},
year = {2018}
}
License
- This code is for non-commercial purposes only. For other uses, please contact ACLab at NEU.
- No maintenance service is provided.
Acknowledgements
The training pipeline depends on the SURREAL dataset created by Gul Varol.
The conditional GAN discriminator comes from the original work on image-to-image translation with conditional adversarial nets by Phillip Isola.