Diffusion Models: A Comprehensive Survey of Methods and Applications

This repo collects and categorizes papers on diffusion models according to our survey, Diffusion Models: A Comprehensive Survey of Methods and Applications, which has been accepted by ACM Computing Surveys. Given the rapid development of this field, we will continue to update both the arXiv paper and this repo.

Overview

<div align="center"><img width="900" alt="Overview of the survey taxonomy" src="https://user-images.githubusercontent.com/62683396/227244860-3608bf02-b2af-4c00-8e87-6221a59a4c42.png"></div>

Catalogue

[Algorithm Taxonomy](#1)

[Sampling-Acceleration Enhancement](#1.1)

[Likelihood-Maximization Enhancement](#1.2)

[Data with Special Structures](#1.3)

[Diffusion with (Multimodal) LLM](#1.4)

[Diffusion with DPO/RLHF](#1.5)

[Application Taxonomy](#2)

[Connections with Other Generative Models](#3)

<p id="1"></p >

Algorithm Taxonomy

<p id="1.1"></p >

1. Efficient Sampling

<p id="1.1.1"></p >

1.1 Learning-Free Sampling

<p id="1.1.1.1"></p >
1.1.1 SDE Solver

Score-Based Generative Modeling through Stochastic Differential Equations

Adversarial score matching and improved sampling for image generation

Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction

Score-Based Generative Modeling with Critically-Damped Langevin Diffusion

Gotta Go Fast When Generating Data with Score-Based Models

Elucidating the Design Space of Diffusion-Based Generative Models

Generative modeling by estimating gradients of the data distribution

Structure-Guided Adversarial Training of Diffusion Models
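
For readers new to this category, the sketch below illustrates the simplest form of an SDE-solver sampler: Euler–Maruyama integration of the reverse-time VP-SDE. It is a generic illustration rather than the method of any particular paper above; the score network `score_fn`, the linear `beta(t)` schedule, and the toy Gaussian example are assumptions made only to keep the snippet runnable.

```python
import torch

def reverse_sde_sample(score_fn, shape, n_steps=1000, beta_min=0.1, beta_max=20.0):
    """Euler–Maruyama integration of the reverse-time VP-SDE from t = 1 down to t ~ 0."""
    x = torch.randn(shape)                            # sample from the Gaussian prior at t = 1
    dt = 1.0 / n_steps
    for i in range(n_steps, 0, -1):
        t = i / n_steps
        beta_t = beta_min + t * (beta_max - beta_min)  # linear beta(t) schedule (an assumption)
        drift = -0.5 * beta_t * x - beta_t * score_fn(x, t)
        noise = torch.randn_like(x) if i > 1 else torch.zeros_like(x)  # no noise on the last step
        x = x - drift * dt + (beta_t * dt) ** 0.5 * noise
    return x

# Toy check: for data ~ N(0, I) the score is -x at every t, so samples should be ~ N(0, I).
samples = reverse_sde_sample(lambda x, t: -x, shape=(4, 2))
```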

<p id="1.1.1.2"></p >
1.1.2 ODE Solver

Denoising Diffusion Implicit Models

Improving Diffusion-Based Image Synthesis with Context Prediction

gDDIM: Generalized denoising diffusion implicit models

Elucidating the Design Space of Diffusion-Based Generative Models

DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps

Pseudo Numerical Methods for Diffusion Models on Manifolds

Fast Sampling of Diffusion Models with Exponential Integrator

Poisson flow generative models

Cross-Modal Contextualized Diffusion Models for Text-Guided Visual Generation and Editing

Structure-Guided Adversarial Training of Diffusion Models

Consistency Flow Matching: Defining Straight Flows with Velocity Consistency
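
As a concrete reference point for this category, here is a minimal sketch of the deterministic DDIM update (the eta = 0 case from Denoising Diffusion Implicit Models, listed above). The noise-prediction network `eps_fn` and the linear-beta `alpha_bar` schedule in the toy usage are placeholders, not taken from any specific implementation.

```python
import torch

def ddim_step(x_t, t, t_prev, eps_fn, alpha_bar):
    """One deterministic DDIM step from timestep t to t_prev (t_prev < t)."""
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_fn(x_t, t)
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()   # predicted clean sample x_0
    return a_prev.sqrt() * x0_pred + (1 - a_prev).sqrt() * eps

# Toy usage with a dummy noise predictor and a standard linear-beta cumulative schedule.
betas = torch.linspace(1e-4, 0.02, 1000)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)
x = torch.randn(4, 2)
x = ddim_step(x, t=999, t_prev=799, eps_fn=lambda x_t, t: torch.zeros_like(x_t), alpha_bar=alpha_bar)
```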

<p id="1.1.2"></p >

1.2 Learning-Based Sampling

<p id="1.1.2.1"></p >
1.2.1 Optimized Discretization

Learning to Efficiently Sample from Diffusion Probabilistic Models

GENIE: Higher-Order Denoising Diffusion Solvers

Learning fast samplers for diffusion models by differentiating through sample quality

<p id="1.1.2.2"></p >
1.2.2 Knowledge Distillation

Progressive Distillation for Fast Sampling of Diffusion Models

Knowledge Distillation in Iterative Generative Models for Improved Sampling Speed

<p id="1.1.2.3"></p >
1.2.3 Truncated Diffusion

Accelerating Diffusion Models via Early Stop of the Diffusion Process

Truncated Diffusion Probabilistic Models

<p id="1.2"></p >

2. Improved Likelihood

<p id="1.2.1"></p >

2.1. Noise Schedule Optimization

Cross-Modal Contextualized Diffusion Models for Text-Guided Visual Generation and Editing

Improved denoising diffusion probabilistic models

Variational diffusion models
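
As one concrete example of a hand-designed noise schedule, the snippet below sketches the cosine alpha-bar schedule introduced in Improved Denoising Diffusion Probabilistic Models (listed above); the constants follow that paper, while the tensor-based implementation details are illustrative choices.

```python
import math
import torch

def cosine_alpha_bar(T=1000, s=0.008):
    """alpha_bar(t) = f(t) / f(0) with f(t) = cos^2(((t/T + s) / (1 + s)) * pi / 2)."""
    t = torch.arange(T + 1) / T
    f = torch.cos((t + s) / (1 + s) * math.pi / 2) ** 2
    return f / f[0]

alpha_bar = cosine_alpha_bar()
betas = (1 - alpha_bar[1:] / alpha_bar[:-1]).clamp(max=0.999)  # clipping used in the paper near t = T
```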

<p id="1.2.2"></p >

2.2. Reverse Variance Learning

Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models

Improved denoising diffusion probabilistic models

Stable Target Field for Reduced Variance Score Estimation in Diffusion Models

<p id="1.2.3"></p >

2.3. Exact Likelihood Computation

Structure-Guided Adversarial Training of Diffusion Models

Score-Based Generative Modeling through Stochastic Differential Equations

Maximum likelihood training of score-based diffusion models

A variational perspective on diffusion-based generative models and score matching

Maximum Likelihood Training for Score-based Diffusion ODEs by High Order Denoising Score Matching

Maximum Likelihood Training of Implicit Nonlinear Diffusion Models

Improving Diffusion-Based Image Synthesis with Context Prediction

<p id="1.3"></p >

3. Data with Special Structures

<p id="1.3.1"></p >

3.1. Data with Manifold Structures

<p id="1.3.1.1"></p >
3.1.1 Known Manifolds

Riemannian Score-Based Generative Modeling

Riemannian Diffusion Models

<p id="1.3.1.2"></p >
3.1.2 Learned Manifolds

Score-based generative modeling in latent space

Diffusion priors in variational autoencoders

Hierarchical text-conditional image generation with clip latents

High-resolution image synthesis with latent diffusion models

Improving Diffusion-Based Image Synthesis with Context Prediction

<p id="1.3.2"></p >

3.2. Data with Invariant Structures

GeoDiff: A Geometric Diffusion Model for Molecular Conformation Generation

Permutation invariant graph generation via score-based generative modeling

Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations

DiGress: Discrete Denoising diffusion for graph generation

Learning gradient fields for molecular conformation generation

Graphgdp: Generative diffusion processes for permutation invariant graph generation

SwinGNN: Rethinking Permutation Invariance in Diffusion Models for Graph Generation

Protein-Ligand Interaction Prior for Binding-aware 3D Molecule Diffusion Models

Graphusion: Latent Diffusion for Graph Generation

<p id="1.3.3"></p >

3.3 Discrete Data

Vector quantized diffusion model for text-to-image synthesis

Structured Denoising Diffusion Models in Discrete State-Spaces

Vector Quantized Diffusion Model with CodeUnet for Text-to-Sign Pose Sequences Generation

Deep Unsupervised Learning using Nonequilibrium Thermodynamics

A Continuous Time Framework for Discrete Denoising Models
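
To make the discrete setting concrete, the sketch below implements the uniform-transition forward kernel described in Structured Denoising Diffusion Models in Discrete State-Spaces (listed above), Q_t = (1 - beta_t) I + (beta_t / K) 11^T over K categories; the helper names and the toy usage are illustrative assumptions, not code from that paper.

```python
import torch

def uniform_transition_matrix(K, beta_t):
    """Single-step forward kernel Q_t = (1 - beta_t) * I + (beta_t / K) * ones."""
    return (1 - beta_t) * torch.eye(K) + (beta_t / K) * torch.ones(K, K)

def corrupt(x_onehot, Q):
    """Sample the next noisier state x_t ~ Cat(x_{t-1} Q); rows of x_onehot are one-hot."""
    probs = x_onehot @ Q
    return torch.multinomial(probs, num_samples=1).squeeze(-1)

# Toy usage over K = 5 categories.
K = 5
Q = uniform_transition_matrix(K, beta_t=0.1)
x0 = torch.nn.functional.one_hot(torch.tensor([0, 2, 4]), K).float()
x1 = corrupt(x0, Q)
```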

<p id="1.4"></p >

4. Diffusion with (Multimodal) LLM

<p id="1.4.1"></p >

4.1. Simple Combination

LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to-Image Diffusion Models with Large Language Models

Videodirectorgpt: Consistent multi-scene video generation via llm-guided planning

RealCompo: Dynamic Equilibrium between Realism and Compositionality Improves Text-to-Image Diffusion Models

<p id="1.4.2"></p >

4.2. Deep Collaboration

Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs

VideoTetris: Towards Compositional Text-To-Video Generation

<p id="1.5"></p >

5. Diffusion with DPO/RLHF

Diffusion Model Alignment Using Direct Preference Optimization

ImageReward: Learning and Evaluating Human Preferences for Text-to-Image Generation

IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation

<p id="2"></p>

Application Taxonomy

<p id="2.1"></p>

1. Computer Vision

<p id="2.1.1"></p > <p id="2.1.2"></p > <p id="2.1.3"></p > <p id="2.1.4"></p > <p id="2.1.5"></p > <p id="2.1.6"></p > <p id="2.2"></p>

2. Natural Language Processing

<p id="2.3"></p>

3. Temporal Data Modeling

<p id="2.3.1"></p > <p id="2.3.2"></p > <p id="2.3.3"></p > <p id="2.4"></p>

4. Multi-Modal Learning

<p id="2.4.1"></p > <p id="2.4.2"></p > <p id="2.4.3"></p > <p id="2.4.4"></p > <p id="2.4.5"></p > <p id="2.4.6"></p > <p id="2.5"></p>

5. Robust Learning

<p id="2.5.1"></p > <p id="2.5.2"></p > <p id="2.6"></p>

6. Molecular Graph Modeling

<p id="2.7"></p>

7. Material Design

<p id="2.8"></p>

8. Medical Image Reconstruction

<p id="3"></p>

Connections with Other Generative Models

<p id="3.1"></p>

1. Variational Autoencoder

<p id="3.2"></p>

2. Generative Adversarial Network

<p id="3.3"></p>

3. Normalizing Flow

<p id="3.4"></p>

4. Autoregressive Models

<p id="3.5"></p>

5. Energy-Based Models

Citing

If you find this work useful, please cite our paper:

```bibtex
@article{yang2023diffusurvey,
  title={Diffusion models: A comprehensive survey of methods and applications},
  author={Yang, Ling and Zhang, Zhilong and Song, Yang and Hong, Shenda and Xu, Runsheng and Zhao, Yue and Zhang, Wentao and Cui, Bin and Yang, Ming-Hsuan},
  journal={ACM Computing Surveys},
  volume={56},
  number={4},
  pages={1--39},
  year={2023},
  publisher={ACM New York, NY, USA}
}
```