# Open-Edit: Open-Domain Image Manipulation with Open-Vocabulary Instructions
Xihui Liu, Zhe Lin, Jianming Zhang, Handong Zhao, Quan Tran, Xiaogang Wang, and Hongsheng Li.<br> Published in ECCV 2020.
Paper | 1-minute video | Slides
## Installation

1. Clone this repo:

```bash
git clone https://github.com/xh-liu/Open-Edit
cd Open-Edit
```

2. Install PyTorch 1.1+ and the other requirements:

```bash
pip install -r requirements.txt
```
## Pretrained models

Download the pretrained models from Google Drive.
## Data preparation

We use the Conceptual Captions dataset for training. Download the dataset and put it under the `dataset` folder. Other image-caption datasets can also be used.
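As a starting point, the expected layout under `dataset` might look like the sketch below. The folder and file names here are illustrative assumptions, not taken from the repo; check the data-loading code for the actual paths it expects. Conceptual Captions is distributed as TSV files of (caption, image URL) pairs, so the images themselves must be downloaded separately.

```shell
# Hypothetical dataset layout under the repo root -- names are assumptions.
mkdir -p dataset/conceptual_captions/images

# Placeholder for the real caption TSV shipped with Conceptual Captions;
# replace with the actual annotation file you downloaded.
touch dataset/conceptual_captions/train_captions.tsv

# Downloaded images would go into dataset/conceptual_captions/images/.
ls dataset/conceptual_captions
```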
## Training

The visual-semantic embedding model is trained with VSE++.

The image decoder is trained with:

```bash
bash train.sh
```
## Testing

Specify the image path and text instructions in `test.sh`, then run:

```bash
bash test.sh
```
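For reference, the variables you edit inside `test.sh` might look something like the fragment below. The variable names and values are purely illustrative (the script's actual names may differ); open `test.sh` to see the real ones.

```shell
# Hypothetical fragment of test.sh -- variable names are assumptions,
# shown only to illustrate what "specify the image path and text
# instructions" means. Edit the real script before running it.
IMAGE_PATH=examples/dog.jpg                 # image to manipulate (assumed path)
TEXT_INSTRUCTION="brown dog -> white dog"   # open-vocabulary edit instruction
```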
## Citation

If you use this code for your research, please cite our paper:
```bibtex
@inproceedings{liu2020open,
  title={Open-Edit: Open-Domain Image Manipulation with Open-Vocabulary Instructions},
  author={Liu, Xihui and Lin, Zhe and Zhang, Jianming and Zhao, Handong and Tran, Quan and Wang, Xiaogang and Li, Hongsheng},
  booktitle={European Conference on Computer Vision},
  year={2020}
}
```