# Not All Steps are Created Equal: Selective Diffusion Distillation for Image Manipulation (ICCV 2023)
This is the official implementation of SDD (ICCV 2023).
The conventional diffusion editing pipeline faces a trade-off: adding too much noise hurts the fidelity of the image, while adding too little hurts its editability. In this paper, we propose a novel framework, Selective Diffusion Distillation (SDD), that ensures both fidelity and editability. Instead of editing images directly with a diffusion model, we train a feedforward image manipulation network under the guidance of the diffusion model. In addition, we propose an effective indicator that selects the semantically relevant timestep, so the manipulation network receives the correct semantic guidance from the diffusion model. This approach avoids the dilemma caused by the diffusion process.
<p align="center"> <img src="docs/SDD.png" width="100%"> </p>

For more details, please refer to:
Not All Steps are Created Equal: Selective Diffusion Distillation for Image Manipulation [Paper] <br /> Luozhou Wang*, Shuai Yang*, Shu Liu, Ying-cong Chen
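As rough intuition for the method, below is a minimal, self-contained sketch of a single distillation step, assuming an SDS-style guidance loss queried at one pre-selected timestep. Every name, shape, and schedule in it is a toy stand-in for illustration, not this repository's actual code (see `train.py` for the real implementation).

```python
# Toy sketch of one SDD-style distillation step (illustrative only).
import torch
import torch.nn as nn

class ManipNet(nn.Module):
    """Stand-in feedforward manipulation network (the real SDD network edits
    StyleGAN2 latent codes; this toy version is just a residual MLP)."""
    def __init__(self, dim=512):
        super().__init__()
        self.delta = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, w):
        return w + self.delta(w)  # small residual edit helps preserve fidelity

def distill_loss(eps_model, x_edit, t_star, alphas_cumprod):
    """Score-distillation-style guidance from a frozen denoiser, queried only
    at the single pre-selected timestep t_star."""
    a = alphas_cumprod[t_star]
    noise = torch.randn_like(x_edit)
    x_t = a.sqrt() * x_edit + (1.0 - a).sqrt() * noise  # forward diffusion at t_star
    with torch.no_grad():
        eps_pred = eps_model(x_t, t_star)  # frozen diffusion-model prediction
    # Standard SDS trick: the gradient w.r.t. x_edit is (eps_pred - noise),
    # skipping the denoiser Jacobian.
    return ((eps_pred - noise).detach() * x_edit).sum()

if __name__ == "__main__":
    torch.manual_seed(0)
    net = ManipNet()
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    alphas_cumprod = torch.linspace(0.9999, 0.01, 1000)  # fake noise schedule
    eps_model = lambda x, t: torch.tanh(x)               # fake frozen denoiser
    w = torch.randn(4, 512)                              # fake latent batch
    loss = distill_loss(eps_model, net(w), t_star=350, alphas_cumprod=alphas_cumprod)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print("one distillation step done, loss =", loss.item())
```

The key point the sketch tries to convey is that the diffusion model is queried only at the single timestep the indicator selects, rather than across the whole noise schedule.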
## Installation

- Create an environment with Python 3.8.0:

  ```shell
  conda create -n sdd python==3.8.0
  ```

- Activate it:

  ```shell
  conda activate sdd
  ```

- Install the basic requirements:

  ```shell
  pip install -r requirements.txt
  ```
## Getting Started
### Preparation
- Prepare the data and pretrained checkpoints:
  - Data: CelebA latent code (train), CelebA latent code (test)
  - Pretrained StyleGAN2: stylegan2-ffhq
  - FaceNet for the ID loss: facenet
- Prepare your access token from Hugging Face and place it at `./TOKEN` (a sketch of how such a token is typically consumed follows this list).
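The exact way `./TOKEN` is read is defined in the code; purely as an illustration, a common pattern with diffusers looks like the following (the model id and the `use_auth_token` kwarg are assumptions, not taken from this repository):

```python
# Illustrative only: read a Hugging Face token from a plain-text file and pass
# it to diffusers. Check the repository code for how ./TOKEN is actually used.
from diffusers import StableDiffusionPipeline

with open("./TOKEN") as f:
    token = f.read().strip()

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model id
    use_auth_token=token,              # older diffusers versions accept this kwarg
)
```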
### Inference with the pretrained SDD checkpoint (white hair)
- Download the pretrained SDD checkpoint white hair and place it at `./pretrain/white_hair.pt`.
- Run inference:

  ```shell
  python inference.py --config ./configs/white_hair.yml --work_dir work_dirs/white_hair/
  ```
### Train your own SDD
- Prepare your YAML config file.
- Train SDD:

  ```shell
  python train.py --config [YOUR YAML] --work_dir [YOUR WORK DIR]
  ```
### Search with HQS

- Prepare your YAML config file.
- Run the HQS timestep search:

  ```shell
  python search.py --config [YOUR YAML] --work_dir [YOUR WORK DIR]
  ```
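HQS is the indicator SDD uses to pick the semantically effective timestep (its definition is in the paper and `search.py`). As pure intuition for what the search does, a hypothetical loop might score candidate timesteps and keep the best one; the score function below is a fake stand-in, not the real HQS:

```python
# Hypothetical sketch of a timestep search: score each candidate timestep with
# an indicator and keep the argmax. `fake_hqs` is a placeholder, not the real HQS.
import math

def select_timestep(score_fn, candidates):
    """Return the candidate timestep with the highest indicator score."""
    scores = {t: score_fn(t) for t in candidates}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    # Fake, bell-shaped score peaking at t=400, only to make the demo run.
    fake_hqs = lambda t: math.exp(-((t - 400) / 150) ** 2)
    t_star, scores = select_timestep(fake_hqs, range(50, 1000, 50))
    print("selected timestep:", t_star)
```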
## Citation
If you find this project useful in your research, please consider citing:
```bibtex
@misc{wang2023steps,
  title={Not All Steps are Created Equal: Selective Diffusion Distillation for Image Manipulation},
  author={Luozhou Wang and Shuai Yang and Shu Liu and Ying-cong Chen},
  year={2023},
  eprint={2307.08448},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```