
<div align="center"> <h1>Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation</h1> <div> <a href="https://yuxinn-j.github.io/" target="_blank">Yuxin Jiang</a><sup>*</sup>, <a href="https://liming-jiang.com/" target="_blank">Liming Jiang</a><sup>*</sup>, <a href="https://williamyang1991.github.io/" target="_blank">Shuai Yang</a>, <a href="https://www.mmlab-ntu.com/person/ccloy/" target="_blank">Chen Change Loy</a> </div> <div> MMLab@NTU affiliated with S-Lab, Nanyang Technological University </div> <div> In ICCV 2023. </div>

:page_with_curl:Paper | :globe_with_meridians:Project Page | :open_file_folder:Anime Scene Dataset | 🤗Demo

</br> <div style="width: 100%; text-align: center; margin:auto;"> <img style="width:100%" src="assets/teaser.png"> </div> </div>

## Updates


## :wrench: Installation

1. Clone this repo:

   ```bash
   git clone https://github.com/Yuxinn-J/Scenimefy.git
   cd Scenimefy
   ```

2. Install dependent packages: after installing Anaconda, create a new Conda environment from `Semi_translation/environment.yml`, as shown below.
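   A minimal setup sketch; the environment name passed to `conda activate` is an assumption, so use the `name:` field defined in `Semi_translation/environment.yml`:

   ```bash
   conda env create -f Semi_translation/environment.yml
   conda activate scenimefy  # assumed environment name; check environment.yml
   ```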

## :zap: Quick Inference

1. [Python script](#python-script)
2. [Gradio demo](#gradio-demo)

### Python script
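The inference script lives under `Semi_translation`. The sketch below assumes the CUT-style test interface that the codebase builds on, so the flag set and the checkpoint name (`Shinkai`) are assumptions; consult the script's `--help` for the exact options:

```bash
cd Semi_translation
# hypothetical invocation following the CUT test interface;
# place test images under --dataroot and point --name/--epoch at a downloaded checkpoint
python test.py --dataroot ./datasets/Sample --name shinkai-test \
    --CUT_mode CUT --model cut --phase test --epoch Shinkai --preprocess none
```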

### Gradio demo
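Besides the hosted 🤗 Demo linked above, a Gradio interface can be run locally. A minimal sketch, assuming a hypothetical `app.py` entry point in the repo root:

```bash
pip install gradio
python app.py  # hypothetical entry point; open the local URL it prints
```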

## :train: Quick I2I Train

### Dataset Preparation
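The training command below expects an unpaired and a paired data root. A sketch of the expected layout, assuming the standard `trainA`/`trainB` convention of the CUT codebase; the folder roles are assumptions:

```
Semi_translation/datasets
├── unpaired_s2a
│   ├── trainA   # natural scene photos (source domain)
│   └── trainB   # anime scene images (target domain)
└── pair_s2a
    ├── trainA   # source photos
    └── trainB   # pseudo-paired anime counterparts
```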

### Training

Refer to the `./Semi_translation/script/train.sh` file, or use the following command:

```bash
python train.py --name exp_shinkai --CUT_mode CUT --model semi_cut \
    --dataroot ./datasets/unpaired_s2a --paired_dataroot ./datasets/pair_s2a \
    --checkpoints_dir ./pretrained_models \
    --dce_idt --lambda_VGG -1 --lambda_NCE_s 0.05 \
    --use_curriculum --gpu_ids 0
```

## :checkered_flag: Start From Scratch

### StyleGAN Finetuning [TODO]

### Segmentation Selection

## :open_file_folder: Anime Scene Dataset

A high-quality anime scene dataset comprising 5,958 images.

In compliance with copyright regulations, we cannot directly release the anime images. However, you can conveniently prepare the dataset by following the instructions here.

## :love_you_gesture: Citation

If you find this work useful for your research, please consider citing our paper:

```bibtex
@inproceedings{jiang2023scenimefy,
  title={Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation},
  author={Jiang, Yuxin and Jiang, Liming and Yang, Shuai and Loy, Chen Change},
  booktitle={ICCV},
  year={2023}
}
```

## :hugs: Acknowledgments

Our code is mainly developed based on Cartoon-StyleGAN and Hneg_SRC. We also thank the authors of Mask2Former.

## :newspaper_roll: License

Distributed under the S-Lab License. See LICENSE.md for more information.