
DCT-Net: Domain-Calibrated Translation for Portrait Stylization

Project page | Video | Paper

Official implementation of DCT-Net for Full-body Portrait Stylization.

DCT-Net: Domain-Calibrated Translation for Portrait Stylization,
Yifang Men<sup>1</sup>, Yuan Yao<sup>1</sup>, Miaomiao Cui<sup>1</sup>, Zhouhui Lian<sup>2</sup>, Xuansong Xie<sup>1</sup>,
<sup>1</sup>DAMO Academy, Alibaba Group, Beijing, China
<sup>2</sup>Wangxuan Institute of Computer Technology, Peking University, China
In: SIGGRAPH 2022 (TOG) | arXiv preprint (https://arxiv.org/abs/2207.02426)

<a href="https://colab.research.google.com/github/menyifang/DCT-Net/blob/main/notebooks/inference.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a> Hugging Face Spaces

Demo

demo

News

(2023-03-14) The training guide has been released; you can now train DCT-Net with your own style data.

(2023-02-20) Two new pre-trained style models (design, illustration), trained by combining DCT-Net with Stable Diffusion, are provided. The training guide will be released soon.

(2022-10-09) The multi-style pre-trained models (3d, handdrawn, sketch, artstyle) and usage are available now.

(2022-08-08) The pre-trained model and inference code for the 'anime' style are available now. More styles coming soon.

(2022-08-08) The cartoonization function can now be called directly from the Python SDK.

(2022-07-07) The paper is now available on arXiv (https://arxiv.org/abs/2207.02426).

Web Demo

Requirements

Python 3.7, TensorFlow 1.15 (GPU build optional; a CPU-only install also works), and ModelScope 1.3.2; see Installation below.

Quick Start

<a href="https://colab.research.google.com/github/menyifang/DCT-Net/blob/main/notebooks/inference.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>

git clone https://github.com/menyifang/DCT-Net.git
cd DCT-Net

Installation

conda create -n dctnet python=3.7
conda activate dctnet
pip install --upgrade tensorflow-gpu==1.15 # for GPU support; install tensorflow==1.15 instead for CPU-only
pip install "modelscope[cv]==1.3.2" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
pip install "modelscope[multi-modal]==1.3.2" -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html

Downloads

<img src="assets/sim_anime.png" width="200px"><img src="assets/sim_3d.png" width="200px"><img src="assets/sim_handdrawn.png" width="200px"><img src="assets/sim_sketch.png" width="200px"><img src="assets/sim_artstyle.png" width="200px">
anime | 3d | handdrawn | sketch | artstyle
<img src="assets/sim_design.png" width="200px"><img src="assets/sim_illu.png" width="200px">
design | illustration

Pre-trained models for the different styles can be downloaded with:

python download.py
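download.py presumably fetches the weights from the ModelScope hub; a minimal sketch of such a call, using the anime-style model id as an example (other styles use analogous ids):

```python
# Fetch a style's pre-trained weights from the ModelScope hub.
from modelscope.hub.snapshot_download import snapshot_download

model_dir = snapshot_download(
    'damo/cv_unet_person-image-cartoon_compound-models', cache_dir='.')
print('weights downloaded to:', model_dir)
```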

Inference

python run_sdk.py # inference via the ModelScope SDK
python run.py # inference from source, using the downloaded models
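SDK-based inference boils down to the ModelScope pipeline API; a minimal sketch, assuming the anime-style model id and a local input.png:

```python
# Build a portrait-stylization pipeline and run it on one image.
import cv2
from modelscope.outputs import OutputKeys
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

img_cartoon = pipeline(Tasks.image_portrait_stylization,
                       model='damo/cv_unet_person-image-cartoon_compound-models')
result = img_cartoon('input.png')  # accepts a path, URL, or ndarray
cv2.imwrite('result.png', result[OutputKeys.OUTPUT_IMG])
```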

Video cartoonization

demo_vid

Videos are processed directly as image sequences, frame by frame. Choose a style from: anime, 3d, handdrawn, sketch, artstyle, sd-design, sd-illustration.

python run_vid.py --style anime
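Since each frame is just an image, a frame-by-frame loop in the spirit of run_vid.py could look like the sketch below (the file names and model id are assumptions; the script itself handles the details):

```python
# Sketch: cartoonize a video one frame at a time with OpenCV I/O.
import cv2
from modelscope.outputs import OutputKeys
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

stylize = pipeline(Tasks.image_portrait_stylization,
                   model='damo/cv_unet_person-image-cartoon_compound-models')

cap = cv2.VideoCapture('input.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
writer = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out = stylize(frame)[OutputKeys.OUTPUT_IMG]  # stylized frame (ndarray)
    if writer is None:
        h, w = out.shape[:2]
        writer = cv2.VideoWriter('output.mp4',
                                 cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
    writer.write(out.astype('uint8'))
cap.release()
writer.release()
```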

Training

<a href="https://colab.research.google.com/github/menyifang/DCT-Net/blob/main/notebooks/fastTrain.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>

Data preparation

face_photo: face dataset such as [FFHQ](https://github.com/NVlabs/ffhq-dataset) or other collected real faces.
face_cartoon: 100-300 cartoon face images in a specific style, either self-collected or synthesized with generative models.

Due to copyright issues, we cannot provide the collected cartoon exemplars for training. You can produce cartoon exemplars with style-finetuned Stable Diffusion (SD) models, which can be downloaded from the ModelScope or Hugging Face hubs.

The effects of some style-finetuned SD models are shown below:

<img src="assets/sim1.png" width="240px"><img src="assets/sim2.png" width="240px"><img src="assets/sim3.png" width="240px"><img src="assets/sim4.png" width="240px"><img src="assets/sim5.png" width="240px">
design | watercolor | illustration | clipart | flat

Generate style exemplars with a chosen SD model:

python generate_data.py --style clipart
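If you prefer to synthesize exemplars yourself, a rough diffusers-based equivalent is sketched below; the checkpoint path and prompt are placeholders, not the repo's actual assets:

```python
# Hypothetical exemplar synthesis with a style-finetuned SD checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    'path/to/clipart-finetuned-sd',        # placeholder checkpoint path
    torch_dtype=torch.float16).to('cuda')
for i in range(200):                       # 100-300 exemplars are enough
    image = pipe('a portrait of a person, clipart style').images[0]
    image.save(f'data/raw_style_data/{i:04d}.png')
```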

Extract aligned faces from the raw style images:

python extract_align_faces.py --src_dir 'data/raw_style_data'
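The repo's detector and aligner are adapted from InsightFace (see Acknowledgments); purely as an illustration of what the alignment step does, here is a sketch using InsightFace's public API (paths and crop size are assumptions):

```python
# Detect faces, then warp each one to a canonical 256x256 crop.
import glob, os
import cv2
from insightface.app import FaceAnalysis
from insightface.utils import face_align

app = FaceAnalysis(allowed_modules=['detection'])
app.prepare(ctx_id=0, det_size=(640, 640))

os.makedirs('data/face_cartoon', exist_ok=True)
for path in glob.glob('data/raw_style_data/*.png'):
    img = cv2.imread(path)
    for i, face in enumerate(app.get(img)):
        crop = face_align.norm_crop(img, face.kps, image_size=256)
        name = os.path.splitext(os.path.basename(path))[0]
        cv2.imwrite(f'data/face_cartoon/{name}_{i}.png', crop)
```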

Install the environment required by stylegan2-pytorch, then prepare the training data and fine-tune the generator:

cd source/stylegan2
python prepare_data.py '../../data/face_cartoon' --size 256 --out '../../data/stylegan2/traindata'
python train_condition.py --name 'ffhq_style_s256' --path '../../data/stylegan2/traindata' --config config/conf_server_train_condition_shell.json

After training, generate content-calibrated samples via:

python style_blend.py --name 'ffhq_style_s256'
python generate_blendmodel.py --name 'ffhq_style_s256' --save_dir '../../data/face_cartoon/syn_style_faces'

Run geometry calibration (flip, scale, and rotation augmentation) for both photo and cartoon data:

cd source
python image_flip_agument_parallel.py --data_dir '../data/face_cartoon'
python image_scale_agument_parallel_flat.py --data_dir '../data/face_cartoon'
python image_rotation_agument_parallel_flat.py --data_dir '../data/face_cartoon'
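The three scripts above augment the data geometrically; an illustrative, simplified version of the same flip/scale/rotate idea (exact parameters are assumptions):

```python
# Simplified geometry augmentation: mirror, mild rescales, small rotations.
import glob, os
from PIL import Image

for path in glob.glob('../data/face_cartoon/*.png'):
    img = Image.open(path)
    stem, ext = os.path.splitext(path)
    img.transpose(Image.FLIP_LEFT_RIGHT).save(f'{stem}_flip{ext}')
    for s in (0.9, 1.1):
        w, h = img.size
        img.resize((int(w * s), int(h * s))).save(f'{stem}_s{s}{ext}')
    for deg in (-10, 10):
        img.rotate(deg).save(f'{stem}_r{deg}{ext}')
```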

The recommended dataset structure is:

+--data
|   +--face_photo
|   +--face_cartoon

Resume training from a pre-trained model in a similar style; the style can be chosen from: anime, 3d, handdrawn, sketch, artstyle, sd-design, sd-illustration.

python train_localtoon.py --data_dir PATH_TO_YOUR_DATA --work_dir PATH_SAVE --style anime

Acknowledgments

Face detector and aligner are adapted from Peppa_Pig_Face_Engine and InsightFace.

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@article{men2022dct,
  title={DCT-Net: Domain-Calibrated Translation for Portrait Stylization},
  author={Men, Yifang and Yao, Yuan and Cui, Miaomiao and Lian, Zhouhui and Xie, Xuansong},
  journal={ACM Transactions on Graphics (TOG)},
  volume={41},
  number={4},
  pages={1--9},
  year={2022},
  publisher={ACM New York, NY, USA}
}