<div align="center"> <h1>Diffusion GLA (DiG)</h1> <h3>Scalable and Efficient Diffusion Models with Gated Linear Attention</h3>

Lianghui Zhu<sup>1,2</sup>, Zilong Huang<sup>2 :email:</sup>, Bencheng Liao<sup>1</sup>, Jun Hao Liew<sup>2</sup>, Hanshu Yan<sup>2</sup>, Jiashi Feng<sup>2</sup>, Xinggang Wang<sup>1 :email:</sup>

<sup>1</sup> School of EIC, Huazhong University of Science and Technology, <sup>2</sup> ByteDance

(<sup>:email:</sup>) corresponding author.

arXiv preprint (arXiv:2405.18428)

</div>

News

Abstract

Diffusion models with large-scale pre-training have achieved significant success in the field of visual content generation, particularly exemplified by Diffusion Transformers (DiT). However, DiT models face challenges with scalability and quadratic-complexity efficiency. In this paper, we aim to leverage the long-sequence modeling capability of Gated Linear Attention (GLA) Transformers, expanding their applicability to diffusion models. We introduce Diffusion Gated Linear Attention Transformers (DiG), a simple, adoptable solution with minimal parameter overhead that follows the DiT design but offers superior efficiency and effectiveness. In addition to better performance than DiT, DiG-S/2 exhibits $2.5\times$ higher training speed than DiT-S/2 and saves $75.7\%$ GPU memory at a resolution of $1792 \times 1792$. Moreover, we analyze the scalability of DiG across a variety of computational complexities. DiG models, with increased depth/width or augmentation of input tokens, consistently exhibit decreasing FID. We further compare DiG with other subquadratic-time diffusion models. With the same model size, DiG-XL/2 is $4.2\times$ faster than the recent Mamba-based diffusion model at a $1024$ resolution, and is $1.8\times$ faster than DiT with CUDA-optimized FlashAttention-2 at a $2048$ resolution. All these results demonstrate its superior efficiency among the latest diffusion models.
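
For intuition about the memory and speed numbers above, the quick calculation below shows how the patch-token count, and with it the size of a softmax attention matrix, grows with image resolution. It assumes a DiT-style latent pipeline with an $8\times$ VAE downsampling factor and a patch size of 2; these values are illustrative assumptions, not taken from the DiG training configs.

```python
# Rough token-count arithmetic for a DiT-style latent diffusion pipeline.
# Assumed (illustrative) settings: 8x VAE downsampling, patch size 2.
for res in (256, 512, 1024, 1792, 2048):
    latent = res // 8              # spatial side length of the latent
    tokens = (latent // 2) ** 2    # number of patch tokens after patchify
    # Softmax attention materializes an L x L matrix per head, while a linear
    # attention such as GLA carries only a fixed-size state, independent of L.
    print(f"{res}px -> {tokens:>6} tokens, attention matrix ~{tokens**2:,} entries")
```

Under these assumptions, $1792 \times 1792$ already yields on the order of $10^4$ tokens and $10^8$ attention entries per head, which is where a linear-time token mixer pays off.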

<div align="center"> <img src="assets/dig_teaser_v1.4.png" /> </div> <div align="center"> <img src="assets/scaling_err_v1.1.png" /> </div>

Overview

<div align="center"> <img src="assets/dig_pipeline_v1.9.png" /> </div>
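
The block below is a minimal, single-head sketch of the gated linear attention recurrence that DiG builds on, written in plain PyTorch for readability. The module name `GatedLinearAttention`, the sigmoid-parameterized decay gate, and the sequential loop are illustrative assumptions; the actual DiG code relies on the chunk-parallel GLA kernels from flash-linear-attention rather than a per-token Python loop.

```python
import torch
import torch.nn as nn


class GatedLinearAttention(nn.Module):
    """Illustrative single-head GLA token mixer (recurrent form).

    Maintains a d x d state S_t = diag(a_t) @ S_{t-1} + k_t^T v_t and reads it
    out as o_t = q_t @ S_t, so compute and memory grow linearly in sequence length.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.gate_proj = nn.Linear(dim, dim, bias=False)  # data-dependent decay gate
        self.out_proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim) patch tokens
        b, l, d = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        alpha = torch.sigmoid(self.gate_proj(x))   # decay in (0, 1), shape (b, l, d)
        state = x.new_zeros(b, d, d)               # running matrix-valued state
        outputs = []
        for t in range(l):
            # decay the state row-wise, then add the rank-1 update k_t^T v_t
            state = alpha[:, t].unsqueeze(-1) * state \
                + k[:, t].unsqueeze(-1) * v[:, t].unsqueeze(1)
            outputs.append(torch.einsum("bd,bde->be", q[:, t], state))
        return self.out_proj(torch.stack(outputs, dim=1))
```

In a DiG block, a token mixer of this kind replaces softmax attention inside an otherwise DiT-style block, which is why the parameter overhead relative to DiT stays minimal.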

Envs. for Training

Train Your DiG

Acknowledgement :heart:

This project is based on GLA (paper, code), flash-linear-attention (code), DiT (paper, code), DiS (paper, code), and OpenDiT (code). Thanks for their wonderful work.

Citation

If you find DiG useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.

@article{dig,
  title={DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention},
  author={Lianghui Zhu and Zilong Huang and Bencheng Liao and Jun Hao Liew and Hanshu Yan and Jiashi Feng and Xinggang Wang},
  year={2024},
  eprint={2405.18428},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}