Install
Datasets
Train with facades dataset (mode: B2A)
CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataset pix2pix --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --mode B2A --exp ./facades --display 5 --evalIter 500
- Resulting models are saved in the ./facades directory, named like net[D|G]_epoch_xx.pth
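Since training writes one checkpoint per epoch under the net[D|G]_epoch_xx.pth naming above, you may want to pick out the most recent one. A minimal sketch, assuming only that naming convention; `latest_checkpoint` is a hypothetical helper, not part of the repo:

```python
import re

def latest_checkpoint(filenames, net="G"):
    # Match the netG_epoch_xx.pth / netD_epoch_xx.pth naming used by training
    # and return the filename with the highest epoch number.
    pattern = re.compile(rf"net{net}_epoch_(\d+)\.pth$")
    epochs = [(int(m.group(1)), f) for f in filenames if (m := pattern.search(f))]
    return max(epochs)[1] if epochs else None

files = ["netD_epoch_1.pth", "netG_epoch_1.pth", "netG_epoch_12.pth"]
print(latest_checkpoint(files))  # -> netG_epoch_12.pth
```

The returned path can then be passed to torch.load to restore the generator for evaluation.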
Train with edges2shoes dataset (mode: A2B)
CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataset pix2pix --dataroot /path/to/edges2shoes/train --valDataroot /path/to/edges2shoes/val --mode A2B --exp ./edges2shoes --batchSize 4 --display 5
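In the standard pix2pix datasets, each training example is a single image with domain A on the left and domain B on the right; --mode decides which half is the input and which is the target. A minimal sketch of that selection, treating an image as a plain H x 2W pixel grid (`split_pair` is a hypothetical helper, not the repo's loader):

```python
def split_pair(pair, mode="A2B"):
    # `pair` is a list of rows, each row holding the A half followed by
    # the B half. Split down the middle and order as (input, target).
    w = len(pair[0]) // 2
    a = [row[:w] for row in pair]
    b = [row[w:] for row in pair]
    return (a, b) if mode == "A2B" else (b, a)

pair = [[1, 2, 3, 4],
        [5, 6, 7, 8]]
inp, tgt = split_pair(pair, mode="B2A")
print(inp)  # -> [[3, 4], [7, 8]] (the B half becomes the input)
```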
Results
- Randomly selected input samples
- Corresponding real target samples
- Corresponding generated samples
Note
- We modified torchvision's folder.py and transform.py so as to follow the format of the training images in these datasets
- Most of the hyperparameters are the same as in the paper.
- You can easily reproduce the results of the paper with other datasets.
- Try B2A or A2B translation as needed.
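The options appearing in the training commands above can be collected into an argparse sketch. Only the flag names are taken from those commands; the defaults and choices shown here are illustrative assumptions, not the repo's actual values:

```python
import argparse

# Sketch of the command-line interface used above; defaults are assumptions.
parser = argparse.ArgumentParser(description="pix2pix training options (sketch)")
parser.add_argument("--dataset", default="pix2pix")
parser.add_argument("--dataroot", required=True, help="path to training images")
parser.add_argument("--valDataroot", required=True, help="path to validation images")
parser.add_argument("--mode", choices=["A2B", "B2A"], default="A2B",
                    help="translation direction")
parser.add_argument("--exp", default="./experiment", help="checkpoint directory")
parser.add_argument("--batchSize", type=int, default=1)
parser.add_argument("--display", type=int, default=5)
parser.add_argument("--evalIter", type=int, default=500)

args = parser.parse_args(["--dataroot", "/tmp/train",
                          "--valDataroot", "/tmp/val",
                          "--mode", "B2A"])
print(args.mode)  # -> B2A
```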
Reference