
<div align="center">

Docker image for A1111 Stable Diffusion Web UI, Kohya_ss, ComfyUI and InvokeAI


</div>

Now with SDXL support.

Installs

Available on RunPod

This image is designed to run on RunPod. You can use my custom RunPod template to launch it there.

Building the Docker image

> [!NOTE]
> You will need to edit the `docker-bake.hcl` file and update `REGISTRY_USER` and `RELEASE`. You can edit the other values too, but these are the most important ones.
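
Since the variables can also be overridden from the environment (as the build commands below show), you can preview the resolved build configuration before committing to a build; `--print` only prints the configuration and does not build anything:

```bash
# Preview the resolved build configuration without building
REGISTRY_USER=myuser RELEASE=my-release docker buildx bake -f docker-bake.hcl --print
```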

> [!IMPORTANT]
> In order to cache the models, you will need at least 32GB of CPU/system memory (not VRAM) due to the large size of the models. If you have less than 32GB of system memory, you can comment out or remove the code in the Dockerfile that caches the models.
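
To check how much system memory the build host actually has, something like this works on most Linux machines:

```bash
# Show total and available system memory (this is RAM, not GPU VRAM)
free -h
grep MemTotal /proc/meminfo
```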

```bash
# Clone the repo
git clone https://github.com/ashleykleynhans/stable-diffusion-docker.git

# Download the models
cd stable-diffusion-docker
wget https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.safetensors
wget https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors
wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors
wget https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors
wget https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/resolve/main/sdxl_vae.safetensors

# Log in to Docker Hub
docker login

# Build the image, tag the image, and push the image to Docker Hub
docker buildx bake -f docker-bake.hcl --push

# Same as above but customize registry/user/release:
REGISTRY=ghcr.io REGISTRY_USER=myuser RELEASE=my-release docker buildx \
    bake -f docker-bake.hcl --push
```
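
If you want to test the image locally before pushing it to a registry, here is a minimal sketch, assuming a single-platform amd64 build (multi-platform results cannot be loaded into the local image store):

```bash
# Build for a single platform and load the result into the local Docker image store
docker buildx bake -f docker-bake.hcl --set "*.platform=linux/amd64" --load
```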

Running Locally

Install the NVIDIA CUDA Driver
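
Once the driver is installed (along with the NVIDIA Container Toolkit, which `--gpus all` relies on), a quick sanity check looks like this; the CUDA base image tag is only an example and can be swapped for any tag you have available:

```bash
# Verify the driver on the host
nvidia-smi

# Verify that containers can see the GPU
docker run --rm --gpus all nvidia/cuda:12.1.1-base-ubuntu22.04 nvidia-smi
```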

Start the Docker container

```bash
docker run -d \
  --gpus all \
  -v /workspace \
  -p 2999:2999 \
  -p 3000:3001 \
  -p 3010:3011 \
  -p 3020:3021 \
  -p 6006:6066 \
  -p 7777:7777 \
  -p 8000:8000 \
  -p 8888:8888 \
  -p 9090:9090 \
  -e VENV_PATH=/workspace/venvs/a1111 \
  -e JUPYTER_PASSWORD=Jup1t3R! \
  -e ENABLE_TENSORBOARD=1 \
  ashleykza/stable-diffusion-webui:latest
```

You can substitute the image name and tag with your own.
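
To confirm the container started, a quick check (the container ID will differ on your machine):

```bash
# Find the running container and follow its output
docker ps --filter "ancestor=ashleykza/stable-diffusion-webui:latest"
docker logs -f <container-id>
```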

Ports

| Connect Port | Internal Port | Description |
|--------------|---------------|-------------|
| 3000 | 3001 | A1111 Stable Diffusion Web UI |
| 3010 | 3011 | Kohya_ss |
| 3020 | 3021 | ComfyUI |
| 9090 | 9090 | InvokeAI |
| 6006 | 6066 | Tensorboard |
| 7777 | 7777 | Code Server |
| 8000 | 8000 | Application Manager |
| 8888 | 8888 | Jupyter Lab |
| 2999 | 2999 | RunPod File Uploader |
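
Once the container (or pod) is up, you can check that a service is answering on its connect port; the applications can take a while to finish starting, so an initial connection refused is normal. For example, for the A1111 Web UI:

```bash
# Check that the A1111 Web UI responds on its connect port
curl -sI http://localhost:3000 | head -n 1
```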

Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| VENV_PATH | Set the path for the Python venv for the app | /workspace/venvs/a1111 |
| JUPYTER_LAB_PASSWORD | Set a password for Jupyter Lab | (not set - no password) |
| DISABLE_AUTOLAUNCH | Disable Web UIs from launching automatically | (not set) |
| DISABLE_SYNC | Disable syncing if using a RunPod network volume | (not set) |
| ENABLE_TENSORBOARD | Enables Tensorboard on port 6006 | enabled |
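
These are passed with `-e` on `docker run` (or as template environment variables on RunPod). A sketch for disabling auto-launch and syncing; treating any non-empty value such as `1` as "set" for the DISABLE_* variables is an assumption here:

```bash
# Assumption: any non-empty value counts as "set" for the DISABLE_* variables
docker run -d \
  --gpus all \
  -v /workspace \
  -e DISABLE_AUTOLAUNCH=1 \
  -e DISABLE_SYNC=1 \
  ashleykza/stable-diffusion-webui:latest
```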

Logs

Stable Diffusion Web UI, Kohya_ss, ComfyUI, and InvokeAI each write to their own log file, so you can view their output by tailing the log files instead of killing the services.

| Application | Log file |
|-------------|----------|
| Stable Diffusion Web UI | /workspace/logs/webui.log |
| Kohya_ss | /workspace/logs/kohya_ss.log |
| ComfyUI | /workspace/logs/comfyui.log |
| InvokeAI | /workspace/logs/invokeai.log |

For example:

```bash
tail -f /workspace/logs/webui.log
```
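
To follow all of the application logs at once:

```bash
# Follow every log file in the logs directory
tail -f /workspace/logs/*.log
```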

Community and Contributing

Pull requests and issues on GitHub are welcome. Bug fixes and new features are encouraged.

Appreciate my work?

<a href="https://www.buymeacoffee.com/ashleyk" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>