diffusers-rs: A Diffusers API in Rust/Torch

A rusty robot holding a fire torch, generated by stable diffusion using Rust and libtorch.

The diffusers crate is a Rust equivalent of Hugging Face's amazing diffusers Python library. It is based on the tch crate, which provides Rust bindings for libtorch. The implementation supports running Stable Diffusion v1.5 and v2.1.

Getting the weights

The weight files can be retrieved from the HuggingFace model repos and should be placed in the data/ directory.

# Add --sd_version 1.5 to get the v1.5 weights rather than the v2.1 ones.
python3 ./scripts/get_weights.py

Running an example

cargo run --example stable-diffusion --features clap -- --prompt "A rusty robot holding a fire torch."

The final image is named sd_final.png by default. The default scheduler is the Denoising Diffusion Implicit Model scheduler (DDIM). The original paper and some code can be found in the associated repo.

This generates some images of rusty robots holding some torches!

<img src="media/robot3.jpg" width=256><img src="media/robot4.jpg" width=256><img src="media/robot7.jpg" width=256>

<img src="media/robot8.jpg" width=256><img src="media/robot11.jpg" width=256><img src="media/robot13.jpg" width=256>

Image to Image Pipeline

The stable diffusion model can also generate an image based on another image. The following command runs this image-to-image pipeline:

cargo run --example stable-diffusion-img2img --features clap -- --input-image media/in_img2img.jpg

The default prompt is "A fantasy landscape, trending on artstation.", but can be changed via the -prompt flag.

*img2img input and output images*
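
Under the hood, img2img does not start the denoising loop from pure noise: the input image is VAE-encoded into latents and forward-noised up to an intermediate timestep controlled by a strength parameter, and only the remaining steps are denoised. A minimal self-contained sketch of that seeding step, using tch directly rather than the crate's scheduler API (all names here are illustrative):

```rust
use tch::Tensor;

/// Forward-noise the VAE encoding of the input image to the timestep where
/// denoising will start; `alpha_cumprod` is the cumulative noise-schedule
/// value at that timestep.
fn img2img_init(
    image_latents: &Tensor, // VAE encoding of the input image
    alpha_cumprod: f64,     // schedule value at the starting step
    strength: f64,          // 1.0 behaves like text2img, 0.0 keeps the input
    n_steps: usize,
) -> (Tensor, usize) {
    // Only the last `strength` fraction of the schedule is actually denoised.
    let t_start = n_steps - (n_steps as f64 * strength) as usize;
    // Standard forward process: x_t = sqrt(a) * x_0 + sqrt(1 - a) * eps.
    let noise = image_latents.randn_like();
    let noised = image_latents * alpha_cumprod.sqrt() + noise * (1. - alpha_cumprod).sqrt();
    (noised, t_start)
}
```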

Inpainting Pipeline

Inpainting modifies parts of an existing image, specified by a mask, based on a prompt. This requires different UNet weights, unet-inpaint.safetensors, which can also be retrieved from this repo and should likewise be placed in the data/ directory.

The following commands fetch a sample image and mask and run the inpainting pipeline:

wget https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png -O sd_input.png
wget https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png -O sd_mask.png
cargo run --example stable-diffusion-inpaint --features clap -- --input-image sd_input.png --mask-image sd_mask.png

The default prompt is "Face of a yellow cat, high resolution, sitting on a park bench.", but can be changed via the -prompt flag.

<img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" width=256><img src="https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" width=256>

*inpainting output*
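
The reason inpainting needs its own UNet weights is the input layout: instead of the usual 4 latent channels, the inpainting UNet consumes 9, the noisy latents concatenated with a downscaled mask and the VAE encoding of the masked image. A sketch of that concatenation with tch (channel layout assumed from the upstream Stable Diffusion inpainting model, not read from this crate):

```rust
use tch::Tensor;

/// Assemble the 9-channel input of the inpainting UNet: noisy latents (4),
/// downscaled mask (1), and VAE-encoded masked image (4), all at latent
/// resolution (H/8, W/8).
fn inpaint_unet_input(latents: &Tensor, mask: &Tensor, masked_image_latents: &Tensor) -> Tensor {
    // Concatenate along the channel dimension: (B, 4+1+4, H/8, W/8).
    Tensor::cat(&[latents, mask, masked_image_latents], 1)
}
```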

ControlNet Pipeline

The ControlNet architecture can be used to control how stable diffusion generates images. It is to be used with the Stable Diffusion v1.5 weights (see above for how to get them). Additional weights have to be retrieved from this HuggingFace repo and copied to data/controlnet.safetensors.

The ControlNet pipeline takes a sample image as input. In the default mode, it performs edge detection on this image using the Canny edge detector and uses the resulting edge image as a guide.

cargo run --example controlnet --features clap,image,imageproc -- \
  --prompt "a rusty robot, lit by a fire torch, hd, very detailed" \
  --input-image media/vermeer.jpg

The media/vermeer.jpg image is the well-known painting shown below on the left; running edge detection on it produces the image on the right.

<img src="https://raw.githubusercontent.com/LaurentMazare/diffusers-rs/main/media/vermeer.jpg" width=256><img src="https://raw.githubusercontent.com/LaurentMazare/diffusers-rs/main/media/vermeer-edges.png" width=256>

Using only the edge image, the ControlNet model generates the following samples.

<img src="https://raw.githubusercontent.com/LaurentMazare/diffusers-rs/main/media/vermeer-out1.jpg" width=256><img src="https://raw.githubusercontent.com/LaurentMazare/diffusers-rs/main/media/vermeer-out2.jpg" width=256><img src="https://raw.githubusercontent.com/LaurentMazare/diffusers-rs/main/media/vermeer-out3.jpg" width=256>

FAQ

Memory Issues

Generating images requires a GPU with more than 8GB of memory. As a fallback, everything can be run on the CPU, though this is much slower:

cargo run --example stable-diffusion --features clap -- --prompt "A very rusty robot holding a fire torch." --cpu all

For a GPU with 8GB of memory, one can use the fp16 weights for the UNet and run only the UNet on the GPU, keeping the CLIP and VAE models on the CPU:

PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6,max_split_size_mb:128 RUST_BACKTRACE=1 CARGO_TARGET_DIR=target2 cargo run \
    --example stable-diffusion --features clap -- --cpu vae --cpu clip \
    --unet-weights data/unet-fp16.safetensors
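
A minimal tch sketch of this mixed-device pattern, with stand-in tensors for the CLIP and VAE stages (shapes are illustrative, taken from the v2.1 defaults): only what the UNet consumes is moved to the GPU, and the denoised latents come back to the CPU for decoding.

```rust
use tch::{Device, Kind, Tensor};

fn main() {
    // Run only the memory-heavy UNet on the GPU; CLIP and the VAE stay on CPU.
    let gpu = Device::cuda_if_available();
    let cpu = Device::Cpu;

    // Stand-ins for the CPU-side CLIP embeddings and initial latents.
    let text_embeddings = Tensor::randn(&[1, 77, 1024], (Kind::Float, cpu));
    let latents = Tensor::randn(&[1, 4, 96, 96], (Kind::Float, cpu));

    // Move just the tensors the UNet consumes to the GPU, in half precision
    // to match the fp16 UNet weights.
    let text_embeddings = text_embeddings.to_device(gpu).to_kind(Kind::Half);
    let mut latents = latents.to_device(gpu).to_kind(Kind::Half);
    // ... the denoising loop would run here, updating `latents` ...

    // Bring the result back to the CPU in full precision for the VAE decode.
    latents = latents.to_device(cpu).to_kind(Kind::Float);
    let _ = (text_embeddings, latents);
}
```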