
License: CC BY-NC-SA 4.0 | Python 3.6

DeFLOCNet: Deep Image Editing via Flexible Low-level Controls (CVPR 2021). The official PyTorch code.

Paper | BibTeX

Hongyu Liu, Ziyu Wan, Wei Huang, Yibing Song, Xintong Han, Jing Liao, Bin Jiang, Wei Liu

[GIF: DeFLOCNet editing demo]

Installation

Clone this repo.

git clone https://github.com/KumapowerLIU/DeFLOCNet.git

Prerequisites

Demo

Please try our GUI demo!

You need to download the pre-trained models into the checkpoints folder: put the pre-trained model for Places2 into checkpoints/nature and the pre-trained model for CelebA into checkpoints/face. Then run demo.py to edit images. We provide example images in the face_sample and nature_sample folders, respectively. Please see the GIF for how to use our GUI!
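Assuming the folder names mentioned above, the expected layout is roughly the following (the directory names come from the text; the checkpoint filenames depend on the files you download):

```
DeFLOCNet/
├── demo.py
├── checkpoints/
│   ├── nature/    # pre-trained model for Places2
│   └── face/      # pre-trained model for CelebA
├── face_sample/   # example face images
└── nature_sample/ # example nature images
```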

python demo.py

Dataset Preparation

Original images: We use the Places2 and CelebA datasets. To train a model on the full dataset, download the datasets from their official websites.

Mask for original image: We use the irregular mask dataset of Liu et al. for the original images (not the color images). You can download the publicly available Irregular Mask Dataset from their website.
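For intuition, applying an irregular mask to an image just zeroes out the hole pixels that the network must fill in. This is a minimal sketch, not code from this repo; the function name and the 1-means-hole convention are assumptions, so check the dataset's documentation:

```python
import numpy as np

def apply_irregular_mask(image, mask):
    """Zero out the masked (hole) pixels of an image.

    image: H x W x 3 array
    mask:  H x W binary array; assumed convention: 1 = hole, 0 = known pixel
    """
    hole = mask.astype(bool)
    masked = image.copy()
    masked[hole] = 0  # hole pixels become black; the network restores them
    return masked
```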

Color images for Places2: We use the RTV smoothing method to extract the color images for Places2. Run the generation function data/matlab/generate_structre_images.m in MATLAB. For example, to generate smooth images for Places2, run the following:

generate_structure_images("path to Places2 dataset root", "path to output folder");

Color images for face: We follow SC-FEGAN and generate the color map for the face using the median color of the segmented areas.

Sketch images: We follow SC-FEGAN and predict the edges using the HED edge detector.
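The median-color idea can be sketched in a few lines: given a segmentation label map, every region is painted with the per-channel median color of its pixels. This is an illustrative sketch, not the SC-FEGAN code; the function name is hypothetical:

```python
import numpy as np

def median_color_map(image, segmentation):
    """Replace every segmented region with its per-channel median color.

    image:        H x W x 3 array
    segmentation: H x W integer label map (one id per region)
    """
    color_map = np.zeros_like(image)
    for label in np.unique(segmentation):
        region = segmentation == label
        # median over all pixels of this region, computed per channel
        median = np.median(image[region], axis=0).astype(image.dtype)
        color_map[region] = median
    return color_map
```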

Code Structure

Pre-trained weights

There are two folders containing the pre-trained models, one per dataset. For how to use these pre-trained models, please see the Demo section!

TODO

<span id="jump2"></span>

Citation

If you use this code for your research, please cite our paper.

@inproceedings{Liu2021DeFLOCNet,
  title={DeFLOCNet: Deep Image Editing via Flexible Low-level Controls},
  author={Hongyu Liu and Ziyu Wan and Wei Huang and Yibing Song and Xintong Han and Jing Liao and Bin Jiang and Wei Liu},
  booktitle={CVPR},
  year={2021}
}