# DragonDiffusion + DiffEditor

Chong Mou, Xintao Wang, Jiechong Song, Ying Shan, Jian Zhang
## 🚩 New Features/Updates
- [2024/02/26] DiffEditor is accepted by CVPR 2024.
- [2024/02/05] Releasing the paper of DiffEditor.
- [2024/02/04] Releasing the code of DragonDiffusion and DiffEditor.
- [2024/01/15] DragonDiffusion is accepted by ICLR 2024 (Spotlight).
- [2023/07/06] The paper of DragonDiffusion is available here.
## Introduction
DragonDiffusion is a tuning-free method for fine-grained image editing. Its core idea comes from score-based diffusion: editing objectives are expressed as energy functions whose gradients guide the sampling process. It can perform various editing tasks, including object moving, object resizing, object appearance replacement, content dragging, and object pasting. DiffEditor further improves the editing accuracy and flexibility of DragonDiffusion.
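To make the score-based idea concrete, here is a minimal, self-contained sketch (not the authors' code) of energy-based guidance in a diffusion sampling step: the predicted noise is shifted by the gradient of a hypothetical editing energy that pulls the latent toward a reference. The energy, the DDIM-like update, and the parameter names (`alpha`, `eta`) are all illustrative assumptions.

```python
import numpy as np

def edit_energy(z, z_ref):
    # Hypothetical editing energy: squared distance to a reference latent.
    # DragonDiffusion uses feature-correspondence energies instead.
    return 0.5 * np.sum((z - z_ref) ** 2)

def energy_grad(z, z_ref):
    # Gradient of the toy energy above with respect to the latent.
    return z - z_ref

def guided_step(z_t, z_ref, eps_pred, alpha=0.9, eta=0.05):
    # One toy DDIM-like update with classifier-guidance-style correction:
    # the energy gradient is added to the predicted noise so that sampling
    # drifts toward states with lower editing energy.
    eps_guided = eps_pred + eta * energy_grad(z_t, z_ref)
    return np.sqrt(alpha) * (z_t - np.sqrt(1 - alpha) * eps_guided)

# Guided step moves the latent closer to the reference than an unguided one.
z_t, z_ref = np.ones(4), np.zeros(4)
z_guided = guided_step(z_t, z_ref, np.zeros(4))
z_unguided = guided_step(z_t, z_ref, np.zeros(4), eta=0.0)
```

This is only a schematic of the guidance mechanism; the actual method computes energies over diffusion features and applies them during latent-space sampling of a pretrained text-to-image model.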
## 🔥🔥🔥 Main Features

### Appearance Modulation
Appearance Modulation can change the appearance of an object in an image. The final appearance can be specified by a reference image.
<p align="center"> <img src="https://huggingface.co/Adapter/DragonDiffusion/resolve/main/asserts/appearance.PNG" height=240> </p>

### Object Moving & Resizing
Object Moving can move an object in the image to a specified location.
<p align="center"> <img src="https://huggingface.co/Adapter/DragonDiffusion/resolve/main/asserts/move.PNG" height=220> </p>

### Face Modulation
Face Modulation can transform the outline of one face into the outline of another reference face.
<p align="center"> <img src="https://huggingface.co/Adapter/DragonDiffusion/resolve/main/asserts/face.PNG" height=250> </p>

### Content Dragging
Content Dragging can perform image editing through point-to-point dragging.
<p align="center"> <img src="https://huggingface.co/Adapter/DragonDiffusion/resolve/main/asserts/drag.PNG" height=230> </p>

### Object Pasting
Object Pasting can paste a given object onto a background image.
<p align="center"> <img src="https://huggingface.co/Adapter/DragonDiffusion/resolve/main/asserts/paste.PNG" height=250> </p>

## 🔧 Dependencies and Installation
- Python >= 3.8 (recommend using Anaconda or Miniconda)
- PyTorch >= 2.0.1

```bash
pip install -r requirements.txt
pip install dlib==19.14.0
```
## ⏬ Download Models
All models will be automatically downloaded. You can also choose to download manually from this url.
## 💻 How to Test

Inference requires at least 16GB of GPU memory for editing a 768x768 image.
We provide a quick start with a Gradio demo:

```bash
python app.py
```
## Related Works

[1] <a href="https://github.com/XingangPan/DragGAN">Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold</a>

[2] <a href="https://yujun-shi.github.io/projects/dragdiffusion.html">DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing</a>

[3] <a href="https://arxiv.org/abs/2306.03881">Emergent Correspondence from Image Diffusion</a>

[4] <a href="https://dave.ml/selfguidance/">Diffusion Self-Guidance for Controllable Image Generation</a>

[5] <a href="https://browse.arxiv.org/abs/2308.06721">IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models</a>

## 🤗 Acknowledgements
We appreciate the foundational work of score-based diffusion and DragGAN.
## BibTeX

```bibtex
@article{mou2023dragondiffusion,
  title={Dragondiffusion: Enabling drag-style manipulation on diffusion models},
  author={Mou, Chong and Wang, Xintao and Song, Jiechong and Shan, Ying and Zhang, Jian},
  journal={arXiv preprint arXiv:2307.02421},
  year={2023}
}

@article{mou2024diffeditor,
  title={DiffEditor: Boosting Accuracy and Flexibility on Diffusion-based Image Editing},
  author={Mou, Chong and Wang, Xintao and Song, Jiechong and Shan, Ying and Zhang, Jian},
  journal={arXiv preprint arXiv:2402.02583},
  year={2024}
}
```