PISE
The code for our CVPR 2021 paper "PISE: Person Image Synthesis and Editing with Decoupled GAN". See also the Project Page and the supplementary material.
Requirement
conda create -n pise python=3.6
conda install pytorch=1.2 cudatoolkit=10.0 torchvision
pip install scikit-image pillow pandas tqdm dominate natsort
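After installation, a quick sanity check can confirm that PyTorch and the CUDA toolkit are visible from the environment. This is only a minimal sketch; the expected versions simply mirror the install commands above:

```python
# Quick sanity check for the conda environment (minimal sketch).
import torch
import torchvision

print("torch:", torch.__version__)                   # expected: 1.2.x
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())  # should be True with cudatoolkit 10.0
```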
Data
Data preparation for images and keypoints can follow Pose Transfer and GFLA.
- Download the DeepFashion dataset. You will need to ask the dataset maintainers for a password. Unzip 'Img/img.zip' and put the folder named 'img' under the './fashion_data' directory.
- Download the train/test key-point annotations and the dataset lists from Google Drive, including fashion-pairs-train.csv, fashion-pairs-test.csv, fashion-annotation-train.csv, fashion-annotation-test.csv, train.lst, and test.lst. Put these files under the ./fashion_data directory.
- Run the following command to split the train/test dataset:
python data/generate_fashion_datasets.py
- Download the parsing data and put it under the ./fashion_data directory. Parsing data for testing can be found on Baidu (fetch code: abcd) or Google Drive. Parsing data for training can be found on Baidu (fetch code: abcd) or Google Drive. You can also generate the parsing data yourself following PGN and re-organize the labels as you need. A small file check is sketched after this list.
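Before training, you can optionally verify that the annotation and list files are in place. The sketch below only checks the file names listed in the steps above; it does not know the parsing-data file names, so extend it as needed:

```python
# Optional check that the annotation/list files sit under ./fashion_data (minimal sketch).
import os

root = './fashion_data'
expected = [
    'fashion-pairs-train.csv', 'fashion-pairs-test.csv',
    'fashion-annotation-train.csv', 'fashion-annotation-test.csv',
    'train.lst', 'test.lst',
]

for name in expected:
    path = os.path.join(root, name)
    status = 'ok' if os.path.isfile(path) else 'MISSING'
    print(status.ljust(8), path)
```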
Train
python train.py --name=fashion --model=painet --gpu_ids=0
Note that if you want to train a pose transfer model as well as texture transfer and region editing, just comment out lines 177 and 178 and uncomment lines 162-176.
For multi-GPU training, you can refer to the related issue in GFLA.
Test
You can directly download our test results from Baidu (fetch code: abcd) or Google Drive. <br> The pre-trained checkpoint for human pose transfer reported in our paper can be found on Baidu (fetch code: abcd) or Google Drive; put it in the ./results/fashion folder.
The pre-trained checkpoint for texture transfer, region editing, and style interpolation used in our paper can be found on Baidu (fetch code: abcd) or Google Drive. Note that the model needs to be changed accordingly.
Test by yourself: <br>
python test.py --name=fashion --model=painet --gpu_ids=0
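For a quick qualitative look at the outputs, the sketch below tiles a few generated images into a single preview. The ./results/fashion path is an assumption based on the checkpoint folder mentioned above; adjust it to wherever your results are actually written:

```python
# Tile a few test outputs into one preview image (the results path is an assumption).
import os
from PIL import Image

result_dir = './results/fashion'   # assumed output location; adjust to your setup
names = sorted(n for n in os.listdir(result_dir)
               if n.lower().endswith(('.jpg', '.png')))[:4]
if not names:
    raise SystemExit('No images found under ' + result_dir)

images = [Image.open(os.path.join(result_dir, n)) for n in names]
canvas = Image.new('RGB', (sum(im.width for im in images),
                           max(im.height for im in images)), 'white')
x = 0
for im in images:
    canvas.paste(im, (x, 0))
    x += im.width
canvas.save('preview.png')
print('Saved preview.png with %d images' % len(images))
```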
Citation
If you use this code, please cite our paper.
@InProceedings{Zhang_2021_CVPR,
author = {Zhang, Jinsong and Li, Kun and Lai, Yu-Kun and Yang, Jingyu},
title = {{PISE}: Person Image Synthesis and Editing With Decoupled GAN},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2021},
pages = {7982-7990}
}
Acknowledgments
Our code is based on GFLA.