<h1 align="center">Optimized Stable Diffusion</h1> <p align="center"> <img src="https://img.shields.io/github/last-commit/basujindal/stable-diffusion?logo=Python&logoColor=green&style=for-the-badge"/> <img src="https://img.shields.io/github/issues/basujindal/stable-diffusion?logo=GitHub&style=for-the-badge"/> <img src="https://img.shields.io/github/stars/basujindal/stable-diffusion?logo=GitHub&style=for-the-badge"/> </p>

This repo is a modified version of the Stable Diffusion repo, optimized to use less VRAM than the original by sacrificing inference speed.

To reduce VRAM usage, the following optimizations are used:

<h1 align="center">Installation</h1>

All the modified files are in the optimizedSD folder, so if you have already cloned the original repository you can just download and copy this folder into the original instead of cloning the entire repo. You can also clone this repo and follow the same installation steps as the original (mainly creating the conda environment and placing the weights at the specified location).

Alternatively, if you prefer to use Docker, you can do the following:

  1. Install Docker, the Docker Compose plugin, and the NVIDIA Container Toolkit.
  2. Clone this repo to, e.g., `~/stable-diffusion`.
  3. Put your downloaded `model.ckpt` file into `~/sd-data` (it's a relative path; you can change it in `docker-compose.yml`).
  4. `cd` into `~/stable-diffusion` and execute `docker compose up --build`.

This will launch the Gradio txt2img interface on port 7860. You can also use `docker compose run` to execute the other Python scripts.
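The volume and GPU wiring described above might look roughly like this in `docker-compose.yml` (a hypothetical sketch, not the repo's actual file — service name, port mapping, and mount paths are assumptions; check the real `docker-compose.yml` in the repo):

```yaml
services:
  stable-diffusion:
    build: .
    ports:
      - "7860:7860"            # Gradio UI
    volumes:
      - ../sd-data:/data       # place your downloaded model.ckpt here
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```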

<h1 align="center">Usage</h1>

img2img

```shell
python optimizedSD/optimized_img2img.py --prompt "Austrian alps" --init-img ~/sketch-mountains-input.jpg --strength 0.8 --n_iter 2 --n_samples 5 --H 512 --W 512
```

txt2img

```shell
python optimizedSD/optimized_txt2img.py --prompt "Cyberpunk style image of a Tesla car reflection in rain" --H 512 --W 512 --seed 27 --n_iter 2 --n_samples 5 --ddim_steps 50
```

inpainting

<h1 align="center">Using the Gradio GUI</h1>

<h1 align="center">Arguments</h1>

--seed

Seed for image generation; can be used to reproduce previously generated images. Defaults to a random seed if unspecified.
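The reproducibility that a fixed seed buys you can be illustrated with Python's `random` module (a generic sketch — the actual scripts seed PyTorch's generators, and `generate` below is a hypothetical stand-in for the sampler):

```python
import random

def generate(seed):
    """Stand-in for an image sampler: a fixed seed yields a fixed 'image'."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(4)]

# The same seed reproduces the same output...
assert generate(27) == generate(27)
# ...while a different seed gives a different one.
assert generate(27) != generate(28)
```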

--n_samples

Batch size: the number of images generated in a single pass.

--n_iter

Number of times to repeat the sampling loop; the total number of images generated is `n_iter` × `n_samples`.
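Since the two flags combine multiplicatively, the resulting image count can be sketched as follows (`total_images` is a hypothetical helper, not part of the scripts):

```python
def total_images(n_iter, n_samples):
    """Each iteration generates one batch of n_samples images."""
    return n_iter * n_samples

# e.g. the txt2img example above uses --n_iter 2 --n_samples 5
print(total_images(2, 5))  # → 10
```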

--H & --W

Height & width of the generated image.
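Stable Diffusion's UNet downsamples the latent, so height and width generally need to be multiples of 64 (an assumption about these scripts — they may enforce it themselves). A hypothetical pre-check:

```python
def validate_dims(h, w, multiple=64):
    """Reject sizes the model's downsampling factor cannot handle."""
    if h % multiple or w % multiple:
        raise ValueError(f"H and W must be multiples of {multiple}, got {h}x{w}")
    return h, w

validate_dims(512, 512)  # OK: both are multiples of 64
```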

--turbo

Increases inference speed at the cost of extra VRAM usage.

--precision autocast or --precision full

Whether to use full or mixed (autocast) precision.

--format png or --format jpg

Output image format.

--unet_bs

Batch size for the UNet model.

<h1 align="center">Weighted Prompts</h1>

Troubleshooting

Green-colored output images

Changelog