# DynaST

This is the PyTorch implementation of the following ECCV 2022 paper:

**DynaST: Dynamic Sparse Transformer for Exemplar-Guided Image Generation**

Songhua Liu, Jingwen Ye, Sucheng Ren, and Xinchao Wang
<img src="https://github.com/Huage001/DynaST/blob/main/teaser.jpg" width="500px"/>

## Installation
```shell
git clone https://github.com/Huage001/DynaST.git
cd DynaST
conda create -n DynaST python=3.6
conda activate DynaST
pip install -r requirements.txt
```
## Inference
- Prepare the DeepFashion dataset following the instructions of CoCosNet.
- Create a directory for checkpoints if it does not exist yet:
  ```shell
  mkdir -p checkpoints/deepfashion/
  ```
- Download the pre-trained model from here and move the file to the directory `checkpoints/deepfashion/`.
- Edit the file `test_deepfashion.sh` and set the argument `dataroot` to the root of the DeepFashion dataset.
- Run:
  ```shell
  bash test_deepfashion.sh
  ```
- Check the results in the directory `checkpoints/deepfashion/test/`.
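If you prefer not to open an editor for the `dataroot` step, the edit can also be scripted. A minimal sketch, assuming the launch script passes the path as a `--dataroot <path>` flag (that flag format is an assumption, not taken from the repository):

```shell
# Sketch: rewrite the dataroot argument of a launch script in place.
# The "--dataroot <path>" flag format inside the script is an assumption.
set_dataroot() {
  script="$1"
  root="$2"
  # Replace whatever path currently follows --dataroot with the given one.
  sed -i "s|--dataroot [^ ]*|--dataroot ${root}|" "$script"
}

# e.g.: set_dataroot test_deepfashion.sh /data/DeepFashion
```

The same helper works for `train_deepfashion.sh`; adjust the `sed` pattern if the script stores the path differently.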
## Training
- Create a directory for the pre-trained VGG model if it does not exist yet:
  ```shell
  mkdir vgg
  ```
- Download the pre-trained VGG model used for loss computation from here and move the file to the directory `vgg`.
- Edit the file `train_deepfashion.sh` and set the argument `dataroot` to the root of the DeepFashion dataset.
- Run:
  ```shell
  bash train_deepfashion.sh
  ```
- Checkpoints and intermediate results are saved in the directory `checkpoints/deepfashion/`.
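While a run is in progress, listing the checkpoint directory by modification time is a quick way to see what the trainer has saved so far (the file names written there depend on the trainer and are not specified here):

```shell
# Newest files first: a quick look at recently saved checkpoints/results.
mkdir -p checkpoints/deepfashion/   # no-op if the run already created it
ls -lt checkpoints/deepfashion/ | head -n 6
```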
## Citation

If you find this project useful in your research, please consider citing:
```bibtex
@inproceedings{liu2022dynast,
  author    = {Liu, Songhua and Ye, Jingwen and Ren, Sucheng and Wang, Xinchao},
  title     = {DynaST: Dynamic Sparse Transformer for Exemplar-Guided Image Generation},
  booktitle = {European Conference on Computer Vision},
  year      = {2022},
}
```
## Acknowledgement

This code borrows heavily from CoCosNet. We also thank the authors of Synchronized Batch Normalization for their implementation.