Learning Inclusion Matching for Animation Paint Bucket Colorization

Project Page | Video

<img src="assets/teaser.png" width="800px"/>

This repository provides the official implementation for the following paper:

<p> <div><strong>Learning Inclusion Matching for Animation Paint Bucket Colorization</strong></div> <div><a href="https://ykdai.github.io/">Yuekun Dai</a>, <a href="https://shangchenzhou.com/">Shangchen Zhou</a>, <a href="https://github.com/dienachtderwelt">Qinyue Li</a>, <a href="https://li-chongyi.github.io/">Chongyi Li</a>, <a href="https://www.mmlab-ntu.com/person/ccloy/">Chen Change Loy</a></div> <div>Accepted to <strong>CVPR 2024</strong></div><div><a href="https://arxiv.org/abs/2403.18342">arXiv</a></div> </p>

BasicPBC

Colorizing line art is a pivotal task in the production of hand-drawn cel animation. In this work, we introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments. To facilitate the training of our network, we also propose a unique dataset PaintBucket-Character. This dataset includes rendered line arts alongside their colorized counterparts, featuring various 3D characters.

Installation

  1. Clone the repo

    git clone https://github.com/ykdai/BasicPBC.git
    
  2. Install dependent packages

    cd BasicPBC
    pip install -r requirements.txt
    
  3. Install BasicPBC
    Please run the following command in the BasicPBC root path to install BasicPBC:

    python setup.py develop
    

Data Download

The details of our dataset can be found at this page. The dataset can be downloaded via the following links.

| Dataset | Google Drive | Baidu Netdisk | Number | Description |
| :--- | :---: | :---: | :---: | :--- |
| PaintBucket-Character Train/Test | link | link | 11,345/3,000 | 3D rendered frames for training and testing. Our dataset is a mere 2GB in size, so feel free to download it and enjoy exploring. 😆😆 |
| PaintBucket-Real Test | / | / | 200 | Hand-drawn frames for testing. |

Due to copyright issues, we do not provide download links for the real hand-drawn dataset. Please contact us via e-mail if you want to use it or wish to obtain the project files of our dataset. These hand-drawn frames are only for evaluation and not for any commercial activities.

Pretrained Model

You can download the pretrained checkpoints from the following links. Place the downloaded archive under the ckpt folder and unzip it; then you can run basicsr/test.py for inference.

| Model | Google Drive | Baidu Netdisk |
| :--- | :---: | :---: |
| BasicPBC | link | link |
| BasicPBC-Light | link | link |
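
The place-and-unzip step above can be sketched as a small helper. This is only an illustration, not part of the repo: `prepare_ckpt`, its arguments, and the assumption that the checkpoint ships as a zip archive are all hypothetical.

```python
import zipfile
from pathlib import Path


def prepare_ckpt(zip_path: str, ckpt_dir: str = "ckpt") -> list:
    """Unzip a downloaded checkpoint archive into the ckpt folder
    and return the names of the extracted files."""
    ckpt = Path(ckpt_dir)
    ckpt.mkdir(parents=True, exist_ok=True)  # create ckpt/ if missing
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(ckpt)
        return zf.namelist()
```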

Model Inference

To estimate the colorized frames with our checkpoint trained on PaintBucket-Character, run basicsr/test.py with:

python basicsr/test.py -opt options/test/basicpbc_pbch_test_option.yml

Or you can test the lightweight model by:

python basicsr/test.py -opt options/test/basicpbc_light_test_option.yml

The colorized results will be saved at results/.

To play with your own data, put your anime clip(s) under dataset/test/. Each clip folder should contain a gt subfolder with at least one colorized ground-truth frame and a line subfolder with the line art of every frame.
We also provide two simple examples: laughing_girl and smoke_explosion.

├── dataset 
    ├── test
        ├── laughing_girl
            ├── gt
                ├── 0000.png
            ├── line
                ├── 0000.png
                ├── 0001.png
                ├── ...
        ├── smoke_explosion
            ├── gt
            ├── line
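
A minimal sanity check for a custom clip folder, based on the layout above. `check_clip_folder` is not part of the repo, and the matching rule (every gt frame must have a line-art counterpart with the same file name) is an assumption drawn from the example structure.

```python
from pathlib import Path


def check_clip_folder(clip_dir) -> bool:
    """Return True if a clip folder matches the expected layout:
    a `gt` subfolder with at least one colorized frame and a
    `line` subfolder with the line art of every frame."""
    clip = Path(clip_dir)
    gt = sorted((clip / "gt").glob("*.png"))
    line = sorted((clip / "line").glob("*.png"))
    if not gt or not line:
        return False
    # every gt frame should have a line drawing with the same name
    line_names = {p.name for p in line}
    return all(p.name in line_names for p in gt)
```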

To run inference on laughing_girl, run inference_line_frames.py with:

python inference_line_frames.py --path dataset/test/laughing_girl

Or run this to try with smoke_explosion:

python inference_line_frames.py --path dataset/test/smoke_explosion/ --mode nearest

Find results under results/.

inference_line_frames.py provides several arguments for different use cases.

Model Training

Training with single GPU

To train a model with your own data/model, edit options/train/basicpbc_pbch_train_option.yml and run the following command.

python basicsr/train.py -opt options/train/basicpbc_pbch_train_option.yml

Training with multiple GPUs

You can run the following command for multi-GPU training:

CUDA_VISIBLE_DEVICES=0,1 bash scripts/dist_train.sh 2 options/train/basicpbc_pbch_train_option.yml

BasicPBC Structure

├── BasicPBC
    ├── assets
    ├── basicsr
        ├── archs
        ├── data
        ├── losses
        ├── metrics
        ├── models
        ├── ops
        ├── utils
    ├── dataset
        ├── train
            ├── PaintBucket_Char
        ├── test
            ├── PaintBucket_Char
            ├── PaintBucket_Real
    ├── experiments
    ├── options
        ├── test
        ├── train
    ├── paint
    ├── raft
    ├── results
    ├── scripts

License

This project is licensed under <a rel="license" href="https://github.com/ykdai/BasicPBC/blob/main/LICENSE">S-Lab License 1.0</a>. Redistribution and use of the dataset and code for non-commercial purposes should follow this license.

Citation

If you find this work useful, please cite:

@inproceedings{InclusionMatching2024,
  title     = {Learning Inclusion Matching for Animation Paint Bucket Colorization},
  author    = {Dai, Yuekun and Zhou, Shangchen and Li, Qinyue and Li, Chongyi and Loy, Chen Change},
  booktitle = {CVPR},
  year      = {2024},
}

Contact

If you have any questions, please feel free to reach out to me at ydai005@e.ntu.edu.sg.