# ComfyUI-InstanceDiffusion
ComfyUI nodes to use InstanceDiffusion.
Original research repo: https://github.com/frank-xwang/InstanceDiffusion
## Table of Contents

- [Installation](#installation)
- [Accompanying Node Repos](#accompanying-node-repos)
- [Examples](#examples)
- [Unsupported Features](#unsupported-features)
- [Acknowledgements](#acknowledgements)
## Installation

### How to Install
Clone or download this repo into your `ComfyUI/custom_nodes/` directory.
There are no Python package requirements outside of the standard ComfyUI requirements at this time.
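For convenience, here is a minimal sketch of scripting the clone step with Python's standard library. The ComfyUI root path and the clone URL are placeholders to substitute for your own setup:

```python
import subprocess
from pathlib import Path

# Placeholders -- adjust to your ComfyUI installation and this repo's clone URL.
COMFYUI_ROOT = Path.home() / "ComfyUI"
REPO_URL = "https://github.com/<owner>/ComfyUI-InstanceDiffusion.git"

target = COMFYUI_ROOT / "custom_nodes" / "ComfyUI-InstanceDiffusion"

if target.exists():
    print(f"Already installed at {target}")
else:
    # Clone the node pack into ComfyUI/custom_nodes/
    subprocess.run(["git", "clone", REPO_URL, str(target)], check=True)
```

Restart ComfyUI after installing so the new nodes are loaded.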
### How to Configure Models

The original models were trained by frank-xwang and baked into a Stable Diffusion 1.5 checkpoint. They have been split out into individual model files so they can be used with other SD 1.5 checkpoints.

Download each of the checkpoints below and place it in the listed Installation Directory under `ComfyUI/models/` (e.g. `ComfyUI/models/instance_models/fuser_models/`).
| Model Name | URL | Installation Directory |
|---|---|---|
| fusers.ckpt | huggingface | `instance_models/fuser_models/` |
| positionnet.ckpt | huggingface | `instance_models/positionnet_models/` |
| scaleu.ckpt | huggingface | `instance_models/scaleu_models/` |
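As a quick sanity check after downloading, here is a minimal sketch (assuming the default `ComfyUI/models/` location) that verifies each checkpoint sits in its expected subdirectory:

```python
from pathlib import Path

# Placeholder -- adjust to your ComfyUI installation root.
COMFYUI_ROOT = Path.home() / "ComfyUI"
INSTANCE_MODELS = COMFYUI_ROOT / "models" / "instance_models"

# Expected layout from the table above.
EXPECTED = {
    "fuser_models": "fusers.ckpt",
    "positionnet_models": "positionnet.ckpt",
    "scaleu_models": "scaleu.ckpt",
}

for subdir, ckpt in EXPECTED.items():
    path = INSTANCE_MODELS / subdir / ckpt
    status = "OK" if path.is_file() else "MISSING"
    print(f"{status}: {path}")
```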
## Accompanying Node Repos

## Examples
### Text2Vid example using Kijai's Spline Editor

### Vid2Vid examples
Example workflows can be found in the `example_workflows/` directory.
## Unsupported Features
InstanceDiffusion supports a wide range of input types. The following inputs do not yet have nodes that can convert them into InstanceDiffusion conditioning:
- Scribbles
- Points
- Segments
- Masks
Support for points, segments, and masks is planned once proper tracking for these input types is implemented in ComfyUI.
## Acknowledgements
- frank-xwang for creating the original repo and training the models
- Kosinkadink for creating AnimateDiff-Evolved and providing support on integration
- Kijai for improving the speed and adding tracking nodes