[CVPR2024 Highlight] MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis

COCO-MIG Bench: Papers with Code leaderboard

Online Demo on Colab: Open In Colab

[MIGC Paper] [MIGC++ Paper] [Project Page] [ZhiHu(ηŸ₯乎)]

πŸ”₯πŸ”₯πŸ”₯ News


To Do List

Gallery

(Gallery animations: attribute control, quantity control, animation creation)

<a id="Installation"></a>

Installation

Conda environment setup

```bash
conda create -n MIGC_diffusers python=3.9 -y
conda activate MIGC_diffusers
pip install -r requirement.txt
pip install -e .
```
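If you want to confirm the environment resolved correctly before moving on, here is a minimal sanity check, assuming only that torch and diffusers are among the pinned requirements:

```python
# Minimal environment sanity check: verifies the two core dependencies
# import cleanly and reports whether a CUDA device is visible.
import torch
import diffusers

print("torch:", torch.__version__)
print("diffusers:", diffusers.__version__)
print("CUDA available:", torch.cuda.is_available())
```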

Checkpoints

Download MIGC_SD14.ckpt (219 MB) and put it under the pretrained_weights folder:

```
β”œβ”€β”€ pretrained_weights
β”‚   β”œβ”€β”€ MIGC_SD14.ckpt
β”œβ”€β”€ migc
β”‚   β”œβ”€β”€ ...
β”œβ”€β”€ bench_file
β”‚   β”œβ”€β”€ ...
```
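Optionally, you can verify the checkpoint loads before running inference. A quick hedged check, assuming only that the file is a standard torch-serialized object:

```python
# Quick integrity check for the downloaded checkpoint: if torch.load
# succeeds, the file is at least a valid serialized object.
import torch

state = torch.load("pretrained_weights/MIGC_SD14.ckpt", map_location="cpu")
print(type(state))
if isinstance(state, dict):
    print("top-level entries:", len(state))
```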

Single Image Generation

You can quickly generate an image with MIGC using the following command:

```bash
CUDA_VISIBLE_DEVICES=0 python inference_single_image.py
```

Below is an example generated with Stable Diffusion v1.4 as the base model.

<p align="center"> <img src="figures/MIGC_SD14_out.png" alt="example" width="200" height="200"/> <img src="figures/MIGC_SD14_out_anno.png" alt="example_annotation" width="200" height="200"/> </p>

πŸš€ Enhanced Attribute Control: For finer control over instance attributes, try the python inferencev2_single_image.py script. This version, InferenceV2, substantially mitigates attribute leakage: at the cost of slightly longer inference, it raises the Instance Success Ratio from 66% to 68% on the COCO-MIG benchmark. Increasing NaiveFuserSteps in inferencev2_single_image.py also yields stronger attribute control.

<p align="center"> <img src="figures/infer_v2_demo.png" alt="example" width="700" height="300"/> </p>

πŸ’‘ Versatile Image Generation: MIGC is a plug-and-play controller. By simply swapping in different base generator weights, you can produce images in a wide range of styles and quality levels, akin to those showcased in our Gallery. For instance:

<p align="center"> <img src="figures/diverse_base_model.png" alt="example" width="1000" height="230"/> </p>

[New] 🌈 Iterative Editing Mode: The Consistent-MIG algorithm improves MIGC's iterative multi-instance generation: it lets you modify selected instances while preserving the unmodified regions and maximizing the ID consistency of the modified instances. See the python inference_consistent_mig.py script for usage. For instance:

<p align="center"> <img src="figures/consistent-mig.jpg" alt="example" /> </p>

Training

Due to company policy, we are unable to release the MIGC training code. For now, the best we can do is provide the community with the script we use to preprocess the COCO dataset (i.e., to obtain each instance's box and caption); the relevant code is in the 'data_preparation' folder. If permission is granted in the future, we will open-source the training code.

COCO-MIG Bench

To validate a model's performance in position and attribute control, we designed the COCO-MIG benchmark.

Run inference for our method on the COCO-MIG bench with the following command:

```bash
CUDA_VISIBLE_DEVICES=0 python inference_mig_benchmark.py
```

We sampled 800 images and compared MIGC with InstanceDiffusion, GLIGEN, and other methods. Results on the COCO-MIG benchmark are shown below; levels L2-L6 denote the number of instances to be generated per image.
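To make the two headline metrics concrete: MIoU averages, over instances, the IoU between each generated instance's detected box and its target box, and an instance counts as a success when that overlap is high enough and its attribute (e.g., color) is rendered correctly. The sketch below is a simplified reading of those definitions; the 0.5 threshold and the boolean attribute check are assumptions, so defer to the benchmark code for the exact rules.

```python
# Simplified, hedged sketch of COCO-MIG-style scoring. Each instance is a
# (target_box, detected_box, attribute_correct) triple with boxes given as
# [x0, y0, x1, y1]. Threshold and attribute handling are assumptions.
def iou(a, b):
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def mig_scores(instances, iou_thresh=0.5):
    ious = [iou(target, detected) for target, detected, _ in instances]
    hits = [v >= iou_thresh and ok
            for v, (_, _, ok) in zip(ious, instances)]
    return sum(ious) / len(ious), sum(hits) / len(hits)  # (MIoU, success ratio)
```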

<table style="text-align: center;">
  <thead>
    <tr>
      <th rowspan="2" style="text-align: center;">Method</th>
      <th colspan="6" style="text-align: center;">MIoU↑</th>
      <th colspan="6" style="text-align: center;">Instance Success Ratio↑</th>
      <th rowspan="2" style="text-align: center;">Model Type</th>
      <th rowspan="2" style="text-align: center;">Publication</th>
    </tr>
    <tr>
      <th>L2</th><th>L3</th><th>L4</th><th>L5</th><th>L6</th><th>Avg</th>
      <th>L2</th><th>L3</th><th>L4</th><th>L5</th><th>L6</th><th>Avg</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><a href="https://github.com/showlab/BoxDiff">BoxDiff</a></td>
      <td>0.37</td><td>0.33</td><td>0.25</td><td>0.23</td><td>0.23</td><td>0.26</td>
      <td>0.28</td><td>0.24</td><td>0.14</td><td>0.12</td><td>0.13</td><td>0.16</td>
      <td>Training-free</td><td>ICCV2023</td>
    </tr>
    <tr>
      <td><a href="https://github.com/gligen/GLIGEN">GLIGEN</a></td>
      <td>0.37</td><td>0.29</td><td>0.25</td><td>0.26</td><td>0.26</td><td>0.27</td>
      <td>0.42</td><td>0.32</td><td>0.27</td><td>0.27</td><td>0.28</td><td>0.30</td>
      <td>Adapter</td><td>CVPR2023</td>
    </tr>
    <tr>
      <td><a href="https://github.com/microsoft/ReCo">ReCo</a></td>
      <td>0.55</td><td>0.48</td><td>0.49</td><td>0.47</td><td>0.49</td><td>0.49</td>
      <td>0.63</td><td>0.53</td><td>0.55</td><td>0.52</td><td>0.55</td><td>0.55</td>
      <td>Full model tuning</td><td>CVPR2023</td>
    </tr>
    <tr>
      <td><a href="https://github.com/frank-xwang/InstanceDiffusion">InstanceDiffusion</a></td>
      <td>0.52</td><td>0.48</td><td>0.50</td><td>0.42</td><td>0.42</td><td>0.46</td>
      <td>0.58</td><td>0.52</td><td>0.55</td><td>0.47</td><td>0.47</td><td>0.51</td>
      <td>Adapter</td><td>CVPR2024</td>
    </tr>
    <tr>
      <td><a href="https://github.com/limuloo/MIGC">Ours</a></td>
      <td><b>0.64</b></td><td><b>0.58</b></td><td><b>0.57</b></td><td><b>0.54</b></td><td><b>0.57</b></td><td><b>0.56</b></td>
      <td><b>0.74</b></td><td><b>0.67</b></td><td><b>0.67</b></td><td><b>0.63</b></td><td><b>0.66</b></td><td><b>0.66</b></td>
      <td>Adapter</td><td>CVPR2024</td>
    </tr>
  </tbody>
</table>

MIGC-GUI

We have combined MIGC and GLIGEN-GUI to make art creation more convenient for users. πŸ””This GUI is still being optimized. If you have any questions or suggestions, please contact me at zdw1999@zju.edu.cn.


Start with MIGC-GUI

Step 1: Download the MIGC_SD14.ckpt and place it in pretrained_weights/MIGC_SD14.ckpt. 🚨If you have already completed this step during the Installation phase, feel free to skip it.

Step 2: Download the CLIPTextModel and place it in migc_gui_weights/clip/text_encoder/pytorch_model.bin.

Step 3: Download the CetusMix model and place it in migc_gui_weights/sd/cetusMix_Whalefall2.safetensors. Alternatively, you can visit civitai to download other models of your preference and place them in migc_gui_weights/sd/.

```
β”œβ”€β”€ pretrained_weights
β”‚   β”œβ”€β”€ MIGC_SD14.ckpt
β”œβ”€β”€ migc_gui_weights
β”‚   β”œβ”€β”€ sd
β”‚   β”‚   β”œβ”€β”€ cetusMix_Whalefall2.safetensors
β”‚   β”œβ”€β”€ clip
β”‚   β”‚   β”œβ”€β”€ text_encoder
β”‚   β”‚   β”‚   β”œβ”€β”€ pytorch_model.bin
β”œβ”€β”€ migc_gui
β”‚   β”œβ”€β”€ app.py
```

Step 4: cd migc_gui

Step 5: Launch the application with python app.py --port=3344. You can then access the MIGC GUI at http://localhost:3344/. Feel free to change the port as needed.

Consistent-MIG in MIGC-GUI


<p align="center"> <img src="figures/edit_button.jpg" alt="example" style="width: 50%; height: auto;"/> </p>

Tick the EditMode checkbox in the IMAGE DIMENSIONS area and try it!

MIGC + LoRA

MIGC achieves powerful attribute and position control when combined with LoRA. πŸš€ We will integrate this feature into MIGC-GUI in the future, so stay tuned! πŸŒŸπŸ‘€

<p align="center"> <img src="figures/migc_lora_id.png" alt="migc_lora_id" width="190" height="300"/> <img src="figures/migc_lora.png" alt="migc_lora" width="190" height="300"/> <img src="figures/migc_lora_anno.png" alt="migc_lora_anno" width="190" height="300"/> <img src="figures/migc_lora_gui_creation.png" alt="migc_lora_gui_creation" width="580" height="300"/> </p>

Ethical Considerations

The broad range of images MIGC can create raises ethical considerations similar to those of many other text-to-image generation methods.

🏫 About us

Thank you for your interest in this project. The project is supervised by the ReLER Lab at Zhejiang University's College of Computer Science and Technology, together with HUAWEI. ReLER was established by Yi Yang, a Qiu Shi Distinguished Professor at Zhejiang University. Our dedicated team of contributors includes Dewei Zhou, You Li, Ji Xie, Fan Ma, Zongxin Yang, and Yi Yang.

Contact us

If you have any questions, feel free to contact me via email at zdw1999@zju.edu.cn.

Acknowledgements

Our work is built on Stable Diffusion, diffusers, CLIP, and GLIGEN-GUI. We appreciate their outstanding contributions.

Citation

If you find this repository useful, please cite it with the following BibTeX entries.

```bibtex
@inproceedings{Zhou2024MIGCMG,
  title={MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis},
  author={Dewei Zhou and You Li and Fan Ma and Zongxin Yang and Yi Yang},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2024},
  pages={6818-6828},
  url={https://api.semanticscholar.org/CorpusID:267547419}
}

@article{Zhou2024MIGCAM,
  title={MIGC++: Advanced Multi-Instance Generation Controller for Image Synthesis},
  author={Dewei Zhou and You Li and Fan Ma and Zongxin Yang and Yi Yang},
  journal={arXiv preprint arXiv:2407.02329},
  year={2024},
  url={https://api.semanticscholar.org/CorpusID:270878014}
}
```