Be Yourself: Bounded Attention for Multi-Subject Text-to-Image Generation (ECCV 2024)

Omer Dahary, Or Patashnik, Kfir Aberman, Daniel Cohen-Or

Text-to-image diffusion models have an unprecedented ability to generate diverse and high-quality images. However, they often struggle to faithfully capture the intended semantics of complex input prompts that include multiple subjects. Recently, numerous layout-to-image extensions have been introduced to improve user control, aiming to localize subjects represented by specific tokens. Yet, these methods often produce semantically inaccurate images, especially when dealing with multiple semantically or visually similar subjects. In this work, we study and analyze the causes of these limitations. Our exploration reveals that the primary issue stems from inadvertent semantic leakage between subjects in the denoising process. This leakage is attributed to the diffusion model’s attention layers, which tend to blend the visual features of different subjects. To address these issues, we introduce Bounded Attention, a training-free method for bounding the information flow in the sampling process. Bounded Attention prevents detrimental leakage among subjects and enables guiding the generation to promote each subject's individuality, even with complex multi-subject conditioning. Through extensive experimentation, we demonstrate that our method empowers the generation of multiple subjects that better align with given prompts and layouts.

<a href="https://omer11a.github.io/bounded-attention/"><img src="https://img.shields.io/static/v1?label=Project&message=Website&color=red" height=20.5></a> <a href="https://arxiv.org/abs/2403.16990"><img src="https://img.shields.io/badge/arXiv-BA-b31b1b.svg" height=20.5></a> Hugging Face Spaces

<p align="center"> <img src="images/teaser.jpg" width="800px"/> </p>

Description

Official implementation of our "Be Yourself: Bounded Attention for Multi-Subject Text-to-Image Generation" paper.

Setup

Environment

To set up the environment, run:

conda create --name bounded-attention python=3.11.4
conda activate bounded-attention
pip install -r requirements.txt

Then, run in Python:

import nltk
nltk.download('averaged_perceptron_tagger')

Demo

This project has a Gradio demo deployed on Hugging Face Spaces. To run the demo locally, run:

gradio app.py

Then, you can connect to the local demo by browsing to http://localhost:7860/.

Usage

<p align="center"> <img src="images/example.jpg" width="800px"/> <br> Example generations by SDXL with and without Bounded Attention. </p>

Basics

To generate images, run run_xl.py for our SDXL version or run_sd.py for our Stable Diffusion version. Each script calls the run function to generate the images. E.g.,

boxes = [
    [0.35, 0.4, 0.65, 0.9],
    [0, 0.6, 0.3, 0.9],
    [0.7, 0.55, 1, 0.85],
]

prompt = "3 D Pixar animation of a cute unicorn and a pink hedgehog and a nerdy owl traveling in a magical forest"
subject_token_indices = [[7, 8, 17], [11, 12, 17], [15, 16, 17]]

run(boxes, prompt, subject_token_indices, init_step_size=25, final_step_size=10)
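Before calling run, it can help to sanity-check the layout. The following is an illustrative sketch, not part of the repository; it assumes each box holds two normalized coordinate pairs with each minimum strictly before its maximum:

```python
def check_boxes(boxes):
    """Validate that each box holds coordinates normalized to [0, 1]
    (assumed [x_min, y_min, x_max, y_max]) with minima before maxima."""
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        if not (0 <= x0 < x1 <= 1 and 0 <= y0 < y1 <= 1):
            raise ValueError(f"box {i} is malformed: {[x0, y0, x1, y1]}")

# The example layout from above passes silently.
boxes = [
    [0.35, 0.4, 0.65, 0.9],
    [0, 0.6, 0.3, 0.9],
    [0.7, 0.55, 1, 0.85],
]
check_boxes(boxes)
```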

The run function receives the following parameters:

boxes: one bounding box per subject, given as four coordinates normalized to [0, 1].
prompt: the text prompt describing the scene.
subject_token_indices: for each subject, the indices of the prompt tokens that describe it. A token may be shared by several subjects (in the example above, index 17 is shared by all three).
init_step_size, final_step_size: the initial and final step sizes of the guidance updates, with the step size decaying over the denoising process.
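Working out subject_token_indices by hand is error-prone. Below is a hypothetical helper (not part of the repository) that maps a subject phrase to 1-based whitespace-word positions, which is the numbering the example above appears to use:

```python
def word_indices(prompt, phrase):
    """Return the 1-based whitespace-word positions of `phrase` in `prompt`.

    Hypothetical convenience helper: assumes token indices correspond to
    1-based word positions, matching the example in this README.
    """
    words = prompt.split()
    phrase_words = phrase.split()
    for start in range(len(words) - len(phrase_words) + 1):
        if words[start:start + len(phrase_words)] == phrase_words:
            # +1 converts 0-based list positions to 1-based token indices
            return [start + i + 1 for i in range(len(phrase_words))]
    raise ValueError(f"phrase {phrase!r} not found in prompt")

prompt = ("3 D Pixar animation of a cute unicorn and a pink hedgehog "
          "and a nerdy owl traveling in a magical forest")
print(word_indices(prompt, "cute unicorn"))   # [7, 8]
print(word_indices(prompt, "pink hedgehog"))  # [11, 12]
print(word_indices(prompt, "traveling"))      # [17]
```

The three results, combined with the shared "traveling" token, reproduce the subject_token_indices value used in the example.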

Advanced options

The run function also supports the following optional hyperparameters:

Acknowledgements

This code builds on the following repositories:

Citation

If you use this code for your research, please cite our paper:

@misc{dahary2024yourself,
    title={Be Yourself: Bounded Attention for Multi-Subject Text-to-Image Generation},
    author={Omer Dahary and Or Patashnik and Kfir Aberman and Daniel Cohen-Or},
    year={2024},
    eprint={2403.16990},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}