:rocket: Diffusion models are making headlines as a new generation of powerful generative models.
However, much of the ongoing research proposes solutions that are highly task-specific and require large computational resources to train.
:beginner: DiffusionFastForward offers a general template for diffusion models for images that can be a starting point for understanding and researching diffusion-based generative models.
- :zap: PyTorch Lightning to enable easy training!
- :money_with_wings: You can run all experiments online on Google Colab - no need for your own GPU machine!
- :mag_right: Examples for both low-resolution and high-resolution data!
- :tent: Examples of latent diffusion!
- :art: Examples of image translation with diffusion!
The code structure is simple, so that you can easily customize it to your own applications.
:construction: Disclaimer: This repository does not provide any model weights. Its purpose is to enable training new weights on previously unexplored types of data.
# Contents
There are three elements integrated into this project:
- :computer: Code
- :bulb: Notes (in the `notes` directory)
- :tv: Video Course (released on YouTube)
# :computer: Code
This repository offers a starting point for training diffusion models on new types of data. It can serve as a baseline to be developed into more robust solutions tailored to the specifics of a given generative task.
It includes notebooks that can be run stand-alone:
- 01-Diffusion-Sandbox - visualizations of the diffusion process
- 02-Pixel-Diffusion - basic diffusion suitable for low-resolution data
- 03-Conditional-Pixel-Diffusion - image translation with diffusion for low-resolution data
- 04-Latent-Diffusion - latent diffusion suitable for high-resolution data
- 05-Conditional-Latent-Diffusion - image translation with latent diffusion
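To give a sense of what the pixel-diffusion notebooks train, here is a minimal sketch of the standard DDPM forward process and epsilon-prediction loss. It uses NumPy and a dummy zero-output stand-in for the denoising network; the beta schedule values are the common defaults from Ho et al. (2020), and the actual schedules and network in the notebooks may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear beta schedule (illustrative DDPM defaults).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention coefficients

def q_sample(x0, t, noise):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

# One training step: noisify an image and regress the injected noise.
x0 = rng.standard_normal((3, 64, 64))   # stand-in for a normalized image
t = rng.integers(0, T)
noise = rng.standard_normal(x0.shape)
xt = q_sample(x0, t, noise)

# `model` is a placeholder for the denoising network (e.g. a U-Net);
# here it returns zeros, just to show the shape of the objective.
def model(xt, t):
    return np.zeros_like(xt)

loss = np.mean((model(xt, t) - noise) ** 2)  # epsilon-prediction MSE
```

In the notebooks the placeholder is replaced by a trainable network and the loss is minimized with a standard optimizer inside a PyTorch Lightning training step.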
## Dependencies

Assuming `torch` and `torchvision` are installed:

```shell
pip install pytorch-lightning==1.9.3 diffusers einops
```
# :bulb: Notes
Short summary notes are released as part of this repository; they cover the same material as the notebooks:
- 01-Diffusion-Theory - visualizations of the diffusion process
- 02-Pixel-Diffusion - basic diffusion suitable for low-resolution data
- 03-Conditional-Pixel-Diffusion - image translation with diffusion for low-resolution data
- 04-Latent-Diffusion - latent diffusion suitable for high-resolution data
- 05-Conditional-Latent-Diffusion - image translation with latent diffusion
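For the conditional (image translation) notebooks, one common conditioning scheme is channel-wise concatenation: the conditioning image is stacked with the noisy target along the channel axis so the denoiser sees both. The sketch below only illustrates that idea; the exact wiring used in this repository may differ.

```python
import numpy as np

# Hedged sketch of conditioning by channel concatenation (as in e.g. Palette).
cond = np.zeros((3, 64, 64))  # conditioning image (e.g. a source-domain image)
xt = np.zeros((3, 64, 64))    # noisy target at some timestep t
net_input = np.concatenate([xt, cond], axis=0)  # shape (6, 64, 64)
# The denoising network's first convolution simply takes 6 input channels
# instead of 3; the rest of the architecture is unchanged.
```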
# :tv: Video Course (released on YouTube)
The course is released on YouTube and extends this repository with additional topics, such as seminal papers and ongoing research work.
The current plan for the video course (links added upon publishing):
- :tv: #00 Introduction
- :tv: #01 Basics: Denoising Diffusion Process
- :tv: #02 Basics: Denoising Diffusion of Images
- :tv: #03 Practical: Unconditional Diffusion in Low-Resolution
- :soon: #04 Extra: Summary of Seminal Works
- :tv: #05 Basics: Conditional Diffusion
- :tv: #06 Practical: Conditional Diffusion in Low-Resolution
- :tv: #07 Basics: High-Resolution Diffusion
- :tv: #08 Practical: High-Resolution Diffusion
- :soon: #09 Extra: Diffusion Applications
- :soon: #10 Extra: Further Insight into Diffusion
# :moneybag: Training Cost
Most examples use one of two model types, each trainable within a day:
**PixelDiffusion** (good for small images :baby:): direct diffusion in pixel space, appropriate for low-resolution data.

Image Resolution | 64x64
---|---
Training Time | ~10 hrs
Memory Usage | ~4 GB
**LatentDiffusion** (good for large images :whale2:): diffusion in a compressed latent space, useful for high-resolution data.

Image Resolution | 256x256
---|---
Training Time | ~20 hrs
Memory Usage | ~5 GB
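The modest memory gap between the two tables, despite a 16x larger image area, comes from diffusing in a compressed latent space. A rough element count shows why, assuming a typical latent-diffusion autoencoder with 8x spatial downsampling and 4 latent channels (the exact factors used here may differ):

```python
# Rough element-count comparison between pixel-space and latent-space diffusion.
pixel_elements = 256 * 256 * 3                  # diffusing a 256x256 RGB image directly
latent_elements = (256 // 8) * (256 // 8) * 4   # diffusing its 32x32x4 latent instead
ratio = pixel_elements / latent_elements
print(ratio)  # 48.0: the denoiser operates on ~48x fewer elements
```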
# Other Software Resources
There are many great projects focused on diffusion generative models. However, most of them involve somewhat complex frameworks that are not always suitable for learning and preliminary experimentation.
- 🤗 diffusers
- lucidrains PyTorch DDPM
- OpenAI guided-diffusion
- OpenAI improved-diffusion
- CompVis latent-diffusion
- Meta DiT
- MONAI GenerativeModels for Medical Imaging
# Other Educational Resources
Some excellent materials have already been published on the topic! Huge respect to all of the creators :pray: - check them out if their work has helped you!
## :coffee: Blog Posts
- Score-based Perspective by Yang Song
- What are Diffusion Models? by Lilian Weng
- Annotated Diffusion Model by Niels Rogge and Kashif Rasul
- Diffusion as a kind of VAE by Angus Turner
## :crystal_ball: Explanation Videos
- Diffusion Model Math Explained by Outlier
- What are Diffusion Models? by Ari Seff
- Diffusion Models Beat GANs on Image Synthesis, the research paper explained by Yannic Kilcher
## :wrench: Implementation Videos
- Diffusion Models PyTorch Implementation by Outlier
- High-Resolution Image Synthesis with LDMs | ML Coding Series by Aleksa Gordić
## :mortar_board: Video Lectures/Tutorials
- Diffusion Probabilistic Models - MIT 6.S192 lecture by Jascha Sohl-Dickstein
- Generative art using diffusion - MIT 6.S192 lecture by Prafulla Dhariwal
- Learning to Generate Data by Estimating Gradients of the Data Distribution by Yang Song
- Denoising Diffusion-based Generative Modeling: Foundations and Applications tutorial presented at CVPR2022 by Karsten Kreis, Ruiqi Gao and Arash Vahdat
- Generative Modeling by Estimating Gradients of the Data Distribution by Stefano Ermon
- Variational autoencoders and Diffusion Models by Tim Salimans