šŸˆ CatVTON: Concatenation Is All You Need for Virtual Try-On with Diffusion Models

<div style="display: flex; justify-content: center; align-items: center;"> <a href="http://arxiv.org/abs/2407.15886" style="margin: 0 2px;"> <img src='https://img.shields.io/badge/arXiv-2407.15886-red?style=flat&logo=arXiv&logoColor=red' alt='arxiv'> </a> <a href='https://huggingface.co/zhengchong/CatVTON' style="margin: 0 2px;"> <img src='https://img.shields.io/badge/Hugging Face-ckpts-orange?style=flat&logo=HuggingFace&logoColor=orange' alt='huggingface'> </a> <a href="https://github.com/Zheng-Chong/CatVTON" style="margin: 0 2px;"> <img src='https://img.shields.io/badge/GitHub-Repo-blue?style=flat&logo=GitHub' alt='GitHub'> </a> <a href="http://120.76.142.206:8888" style="margin: 0 2px;"> <img src='https://img.shields.io/badge/Demo-Gradio-gold?style=flat&logo=Gradio&logoColor=red' alt='Demo'> </a> <a href="https://huggingface.co/spaces/zhengchong/CatVTON" style="margin: 0 2px;"> <img src='https://img.shields.io/badge/Space-ZeroGPU-orange?style=flat&logo=Gradio&logoColor=red' alt='Demo'> </a> <a href='https://zheng-chong.github.io/CatVTON/' style="margin: 0 2px;"> <img src='https://img.shields.io/badge/Webpage-Project-silver?style=flat&logo=&logoColor=orange' alt='webpage'> </a> <a href="https://github.com/Zheng-Chong/CatVTON/LICENCE" style="margin: 0 2px;"> <img src='https://img.shields.io/badge/License-CC BY--NC--SA--4.0-lightgreen?style=flat&logo=Lisence' alt='License'> </a> </div>

CatVTON is a simple and efficient virtual try-on diffusion model with 1) Lightweight Network (899.06M parameters in total), 2) Parameter-Efficient Training (49.57M trainable parameters), and 3) Simplified Inference (< 8 GB VRAM at 1024×768 resolution).

<div align="center"> <img src="resource/img/teaser.jpg" width="85%" height="100%"/> </div>

Updates

Installation

An Installation Guide is provided to help build the conda environment for CatVTON. When deploying the app, you will need Detectron2 & DensePose, which are not required for inference on datasets. Install the packages according to your needs.

Deployment

ComfyUI Workflow

We have modified the main code so that CatVTON can be easily deployed on ComfyUI. Because the code structure is incompatible with ComfyUI's, we have released this part in the Releases; it includes the code to be placed under ComfyUI's custom_nodes folder and our workflow JSON files.

To deploy CatVTON to your ComfyUI, follow these steps:

  1. Install all the requirements for both CatVTON and ComfyUI, refer to Installation Guide for CatVTON and Installation Guide for ComfyUI.
  2. Download ComfyUI-CatVTON.zip and unzip it in the custom_nodes folder under your ComfyUI project (clone from ComfyUI).
  3. Run ComfyUI.
  4. Download catvton_workflow.json, drag it into your ComfyUI webpage, and enjoy šŸ˜†!

For problems under Windows, please refer to issue#8.

When you run the CatVTON workflow for the first time, the weight files will be downloaded automatically, which usually takes tens of minutes.

<div align="center"> <img src="resource/img/comfyui-1.png" width="100%" height="100%"/> </div> <!-- <div align="center"> <img src="resource/img/comfyui.png" width="100%" height="100%"/> </div> -->

Gradio App

To deploy the Gradio App for CatVTON on your machine, run the following command; checkpoints will be downloaded automatically from HuggingFace.

```bash
CUDA_VISIBLE_DEVICES=0 python app.py \
--output_dir="resource/demo/output" \
--mixed_precision="bf16" \
--allow_tf32
```

When using bf16 precision, generating results at 1024×768 resolution requires only about 8 GB of VRAM.
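Most of that saving comes from halving the bytes stored per weight. A back-of-the-envelope sketch (illustrative only; real VRAM usage also includes activations, attention buffers, and framework overhead):

```python
# Rough weight-memory estimate for CatVTON's 899.06M parameters at
# different precisions. Illustrative only: it ignores activations,
# attention buffers, and framework overhead.
PARAMS = 899.06e6

def weight_gib(params: float, bytes_per_param: int) -> float:
    """Weight memory in GiB for a given per-parameter byte width."""
    return params * bytes_per_param / 1024**3

fp32 = weight_gib(PARAMS, 4)  # ~3.35 GiB
bf16 = weight_gib(PARAMS, 2)  # ~1.67 GiB, half the fp32 footprint
print(f"fp32 weights: {fp32:.2f} GiB, bf16 weights: {bf16:.2f} GiB")
```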

Inference

1. Data Preparation

Before inference, you need to download the VITON-HD or DressCode dataset. Once the datasets are downloaded, the folder structures should look like this:

```
├── VITON-HD
│   ├── test_pairs_unpaired.txt
│   ├── test
│   │   ├── image
│   │   │   ├── [000006_00.jpg | 000008_00.jpg | ...]
│   │   ├── cloth
│   │   │   ├── [000006_00.jpg | 000008_00.jpg | ...]
│   │   ├── agnostic-mask
│   │   │   ├── [000006_00_mask.png | 000008_00_mask.png | ...]
...
├── DressCode
│   ├── test_pairs_paired.txt
│   ├── test_pairs_unpaired.txt
│   ├── [dresses | lower_body | upper_body]
│   │   ├── test_pairs_paired.txt
│   │   ├── test_pairs_unpaired.txt
│   │   ├── images
│   │   │   ├── [013563_0.jpg | 013563_1.jpg | 013564_0.jpg | 013564_1.jpg | ...]
│   │   ├── agnostic_masks
│   │   │   ├── [013563_0.png | 013564_0.png | ...]
...
```

For the DressCode dataset, we provide a script to preprocess the agnostic masks; run the following command:

```bash
CUDA_VISIBLE_DEVICES=0 python preprocess_agnostic_mask.py \
--data_root_path <your_path_to_DressCode>
```

2. Inference on VITON-HD/DressCode

To run inference on the DressCode or VITON-HD dataset, run the following command; checkpoints will be downloaded automatically from HuggingFace.

```bash
CUDA_VISIBLE_DEVICES=0 python inference.py \
--dataset [dresscode | vitonhd] \
--data_root_path <path> \
--output_dir <path> \
--dataloader_num_workers 8 \
--batch_size 8 \
--seed 555 \
--mixed_precision [no | fp16 | bf16] \
--allow_tf32 \
--repaint \
--eval_pair
```

3. Calculate Metrics

After obtaining the inference results, calculate the metrics using the following command:

```bash
CUDA_VISIBLE_DEVICES=0 python eval.py \
--gt_folder <your_path_to_gt_image_folder> \
--pred_folder <your_path_to_predicted_image_folder> \
--paired \
--batch_size=16 \
--num_workers=16
```
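Paired evaluation compares each prediction against its ground-truth image pixel by pixel. As a simple illustration of one such paired metric, here is a minimal PSNR computation with NumPy; this is not the project's eval.py implementation, which computes its own set of metrics.

```python
import numpy as np

# Minimal PSNR between two same-shaped images, for illustration only;
# not the metric set computed by the project's eval.py.
def psnr(gt: np.ndarray, pred: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to ground truth."""
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val**2 / mse)
```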

Acknowledgement

Our code is based on Diffusers. We adopt Stable Diffusion v1.5 inpainting as the base model, and use SCHP and DensePose to automatically generate masks in our Gradio App and ComfyUI workflow. Thanks to all the contributors!

License

All the materials, including code, checkpoints, and demo, are made available under the Creative Commons BY-NC-SA 4.0 license. You are free to copy, redistribute, remix, transform, and build upon the project for non-commercial purposes, as long as you give appropriate credit and distribute your contributions under the same license.

Citation

```bibtex
@misc{chong2024catvtonconcatenationneedvirtual,
  title={CatVTON: Concatenation Is All You Need for Virtual Try-On with Diffusion Models},
  author={Zheng Chong and Xiao Dong and Haoxiang Li and Shiyue Zhang and Wenqing Zhang and Xujie Zhang and Hanqing Zhao and Xiaodan Liang},
  year={2024},
  eprint={2407.15886},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2407.15886},
}
```