Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC

[<a href="https://energy-based-model.github.io/reduce-reuse-recycle/" target="_blank">Project Page</a>][<a href="https://colab.research.google.com/drive/1jvlzWMc6oo-TH1fYMl6hsOYfrcQj2rEs?usp=sharing" target="_blank">Colab</a>]

We provide a framework for probabilistically composing and repurposing diffusion models across different domains as described <a href="https://energy-based-model.github.io/reduce-reuse-recycle/" target="_blank">here</a>.

Since their introduction, diffusion models have quickly become the prevailing approach to generative modeling in many domains. They can be interpreted as learning the gradients of a time-varying sequence of log-probability density functions. This interpretation has motivated classifier-based and classifier-free guidance as methods for post-hoc control of diffusion models. In this work, we build upon these ideas using the score-based interpretation of diffusion models, and explore alternative ways to condition, modify, and reuse diffusion models for tasks involving compositional generation and guidance. In particular, we investigate why certain types of composition fail using current techniques and present a number of solutions. We conclude that the sampler (not the model) is responsible for this failure and propose new samplers, inspired by MCMC, which enable successful compositional generation. Further, we propose an energy-based parameterization of diffusion models which enables the use of new compositional operators and more sophisticated, Metropolis-corrected samplers. Intriguingly, we find these samplers lead to notable improvements in compositional generation across a wide variety of problems such as classifier-guided ImageNet modeling and compositional text-to-image generation.
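As a minimal illustration of the product composition idea, the score of a product of distributions is the sum of the individual scores: ∇ log(p1·p2) = ∇ log p1 + ∇ log p2. The sketch below uses hypothetical closed-form Gaussian scores (not the models in this repo) to show the composition rule:

```python
import torch

def score_gauss(mean):
    # Score of an isotropic Gaussian N(mean, I): grad log p(x) = -(x - mean)
    def score(x):
        return -(x - mean)
    return score

def composed_score(scores, x):
    # Product composition: sum the scores of the component models.
    return sum(s(x) for s in scores)

s1 = score_gauss(torch.tensor([0.0, 0.0]))
s2 = score_gauss(torch.tensor([2.0, 2.0]))
x = torch.tensor([1.0, 1.0])
# The product of two unit Gaussians is a Gaussian centered at the average
# of the means, so the composed score vanishes at the midpoint [1, 1].
print(composed_score([s1, s2], x))  # tensor([0., 0.])
```

Sampling with this summed score alone is the naive approach the paper shows can fail; the MCMC samplers below correct it.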

For more info see the project webpage.

Notebooks

We provide two separate notebooks to aid in reproducing the results presented in the paper.

Training Code

Most of the larger-scale experiments in the paper were run on DeepMind's computational infrastructure and cannot be released. Below is a PyTorch reimplementation written by Bharat Runwal.

Energy Based Diffusion Model Training

A PyTorch implementation of Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC.


Installation

Run the following to create a conda environment from the requirements file and activate it:

conda create --name compose_ebm --file requirements.txt
conda activate compose_ebm

You can download a training dataset here.

Training

To train the energy-parameterized classifier-free diffusion model, run:

bash energy_train_ddp_64.sh

You can change the energy score used for training here (Line ); currently, a denoising-autoencoder-inspired energy function is used.
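A minimal sketch of a denoising-autoencoder-style energy parameterization, using a hypothetical toy denoiser (the actual model in this repo is a U-Net): the energy is E(x, t) = ½‖x − f(x, t)‖², and the score is obtained as −∇ₓE via autograd.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    # Hypothetical stand-in for the U-Net denoiser used in this repo.
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.SiLU(), nn.Linear(32, dim))

    def forward(self, x, t):
        # Time conditioning omitted for brevity.
        return self.net(x)

def energy(model, x, t):
    # Denoising-autoencoder-inspired energy: E(x, t) = 1/2 ||x - f(x, t)||^2
    return 0.5 * (x - model(x, t)).pow(2).sum(dim=-1)

def score(model, x, t):
    # Score = negative gradient of the energy with respect to x.
    x = x.detach().requires_grad_(True)
    e = energy(model, x, t).sum()
    return -torch.autograd.grad(e, x)[0]

model = TinyDenoiser()
x = torch.randn(4, 8)
s = score(model, x, t=torch.zeros(4))
```

Because the score is an explicit gradient of a scalar energy, it can be plugged into the Metropolis-corrected samplers below, which require energy values and not just scores.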

Inference Sampling

anneal_samplers.py contains implementations of various samplers (HMC, UHMC, ULA, MALA) that can be used with reverse diffusion sampling.
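For reference, a self-contained sketch of one MALA step (not the repo's implementation): a Langevin proposal followed by a Metropolis-Hastings accept/reject targeting exp(−E(x)), demonstrated here on a toy Gaussian energy.

```python
import torch

def mala_step(x, energy_fn, step_size):
    # One Metropolis-adjusted Langevin step targeting exp(-energy_fn(x)).
    x = x.detach().requires_grad_(True)
    e = energy_fn(x)
    grad = torch.autograd.grad(e.sum(), x)[0]
    # Langevin proposal: x' = x - eps * grad E(x) + sqrt(2 eps) * noise
    prop = x - step_size * grad + (2 * step_size) ** 0.5 * torch.randn_like(x)
    prop = prop.detach().requires_grad_(True)
    e_prop = energy_fn(prop)
    grad_prop = torch.autograd.grad(e_prop.sum(), prop)[0]

    def log_q(a, b, grad_b):
        # Log density of proposing a from b: N(b - eps * grad E(b), 2 eps I)
        return -((a - b + step_size * grad_b) ** 2).sum(dim=-1) / (4 * step_size)

    log_alpha = (e - e_prop) + log_q(x, prop, grad_prop) - log_q(prop, x, grad)
    accept = torch.rand_like(e).log() < log_alpha
    return torch.where(accept.unsqueeze(-1), prop.detach(), x.detach())

# Toy target: standard Gaussian, E(x) = 1/2 ||x||^2.
torch.manual_seed(0)
gauss_energy = lambda z: 0.5 * (z ** 2).sum(dim=-1)
x = torch.randn(256, 2) * 3.0
for _ in range(200):
    x = mala_step(x, gauss_energy, step_size=0.1)
```

After a few hundred steps the chain's samples should roughly match the unit-variance target. The Metropolis correction is what requires the energy-based parameterization above: unlike ULA, it needs E(x) itself, not only its gradient.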

Note: In the current setting we use MCMC sampling only for t > 50, since we observed that the score function changes the image little during the last 50 steps. You can change this behaviour at this line (Line ). An example of running the MALA sampler with a trained checkpoint path (download the checkpoint from here, or refer to the Colab):

python inf_sample.py --sampler MALA --ckpt_path "ebm-49x1874.pt"