Awesome-DragGAN 🐉


DragGAN has been one of the most popular generative image editing models of late. It provides a brand-new way to edit an image by interactively selecting handle (source) and target points on it, giving users far greater flexibility than existing text-based editing. Though currently constrained to the generative image manifold, the idea behind DragGAN has inspired, and should continue to inspire, a variety of follow-up works.
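
At its core, DragGAN alternates two steps: a motion-supervision loss that nudges the generator's intermediate features around each handle point one pixel toward its target, and a point-tracking step that relocates the handle via nearest-neighbor feature search. The PyTorch sketch below is a minimal, illustrative rendition of that loop, not the official implementation; `generator_features`, the feature-shaped latent, and all hyperparameters are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-in for an intermediate StyleGAN2 feature map; the real
# method optimizes the generator's latent code w and reads features from an
# intermediate synthesis block. Here w is already feature-shaped (1, C, H, W)
# purely so the sketch runs end to end.
def generator_features(w: torch.Tensor) -> torch.Tensor:
    return torch.tanh(w)  # toy placeholder, NOT a real generator

def sample_patch(feat: torch.Tensor, point: torch.Tensor, radius: int) -> torch.Tensor:
    """Bilinearly sample a (2r+1)x(2r+1) patch of `feat` centered at `point` (x, y)."""
    _, _, H, W = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(-radius, radius + 1, dtype=torch.float32),
        torch.arange(-radius, radius + 1, dtype=torch.float32),
        indexing="ij",
    )
    grid = torch.stack([(point[0] + xs) / (W - 1), (point[1] + ys) / (H - 1)], dim=-1)
    grid = (grid * 2 - 1).unsqueeze(0)  # grid_sample expects coords in [-1, 1]
    return F.grid_sample(feat, grid, align_corners=True)

def motion_supervision_step(w, handle, target, radius=3, lr=2e-3):
    """One optimization step that drags features at `handle` one pixel toward `target`."""
    w = w.detach().requires_grad_(True)
    feat = generator_features(w)
    d = (target - handle) / (target - handle).norm().clamp(min=1e-8)
    src = sample_patch(feat, handle, radius).detach()  # reference patch (stop-grad)
    dst = sample_patch(feat, handle + d, radius)       # patch shifted toward target
    F.l1_loss(dst, src).backward()                     # move content, not the reference
    with torch.no_grad():
        w -= lr * w.grad
    return w.detach()

def track_handle(feat, f0, handle, radius=12):
    """Relocate `handle` by nearest-neighbor search for its initial feature f0."""
    best, best_dist = handle, float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            p = handle + torch.tensor([float(dx), float(dy)])
            dist = (sample_patch(feat, p, 0).flatten() - f0).abs().sum().item()
            if dist < best_dist:
                best_dist, best = dist, p
    return best

# Alternate motion supervision and tracking until the handle reaches the target.
w = torch.randn(1, 8, 32, 32)
handle, target = torch.tensor([10.0, 12.0]), torch.tensor([20.0, 12.0])
f0 = sample_patch(generator_features(w), handle, 0).flatten().detach()
for _ in range(5):
    w = motion_supervision_step(w, handle, target)
    handle = track_handle(generator_features(w).detach(), f0, handle)
```

In the real method the choice of feature block, search radii, and step size all matter, and a binary mask can restrict which regions are allowed to move; see the paper and official implementation linked below for the details.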

Awesome-DragGAN is a curated list of papers, repositories, tutorials, and anything else related to DragGAN.

Contributions are welcome!

Starting Point

Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold <br> Xingang Pan, Ayush Tewari, Thomas Leimkühler, Lingjie Liu, Abhimitra Meka, Christian Theobalt <br> [Code] [Project Page] [Official Implementation]

Papers

The Blessing of Randomness: SDE Beats ODE in General Diffusion-based Image Editing <br> Shen Nie, Hanzhong Allan Guo, Cheng Lu, Yuhao Zhou, Chenyu Zheng, Chongxuan Li <br> [Project Page] [Code] <br> Nov 2, 2023

FreeDrag: Point Tracking is Not You Need for Interactive Point-based Image Editing <br> Pengyang Ling*, Lin Chen*, Pan Zhang, Huaian Chen, Yi Jin <br> [Project Page] [Code] <br> July 10, 2023

DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models <br> Chong Mou, Xintao Wang, Jiechong Song, Ying Shan, Jian Zhang <br> [Code] <br> July 5, 2023

DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing <br> Yujun Shi, Chuhui Xue, Jiachun Pan, Wenqing Zhang, Vincent Y. F. Tan, Song Bai <br> [Project Page] [Code] <br> June 26, 2023

Repositories

Tutorials

Pretrained GAN Models