# WAIT
We provide the official PyTorch implementation of:

**WAIT: Feature Warping for Animation to Illustration video Translation using GANs**
**Dataset Stats:**

**Sample Images:**
**WAIT:**

Here we compare the WAIT results with the baseline methods. From left to right:

Input, CycleGAN, OpticalFlowWarping, ReCycleGAN, ReCycleGANv2, WAIT
**WAIT results on AS Style:**

**WAIT results on BP Style:**
## Prerequisites
- Linux, macOS or Windows
- Python 3.6+
- NVIDIA GPU
- CUDA 11.1
- CuDNN 8.0.5
## Getting Started
### Downloading Datasets
Please refer to `datasets.md` for details.
### Installation
- Clone this repo:

```bash
git clone https://github.com/giddyyupp/wait.git
cd wait
```
- Install PyTorch 1.5+ and torchvision from http://pytorch.org, along with the other dependencies (e.g., visdom and dominate). You can install all of the dependencies with:

```bash
pip install -r requirements.txt
```
- Build the Deformable Conv layers:

```bash
cd models/deform_conv
python setup.py install develop
```
## WAIT Train & Test
- Download a GANILLA illustrator dataset and the corresponding animation movies (e.g., BP). For the illustration datasets, please follow the steps explained in the GANILLA repository. For the animations, we use the Peter Rabbit movie to curate the BP dataset, and videos from the ZOG YouTube channel for the AS dataset.
- Train a model:

```bash
python train.py --dataroot ./datasets/bp_dataset --name bp_wait --model cycle_gan_warp --netG resnet_9blocks \
--centerCropSize 256 --resize_or_crop resize_and_centercrop --batch_size 8 --lr 0.0008 --niter_decay 200 --verbose \
--norm_warp "batch" --use_warp_speed_ups --rec_bug_fix --final_conv --merge_method "concat" --time_gap 5 \
--offset_network_block_cnt 10 --warp_layer_cnt 5
```
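The `--time_gap` option controls how far apart the two frames of a training pair are sampled in time. As a rough sketch of what such pairing could look like (a hypothetical helper for illustration only, not this repo's actual data loader):

```python
def sample_frame_pairs(num_frames, time_gap):
    """Pair each frame index i with frame i + time_gap.

    Hypothetical illustration of temporal pair sampling; the real
    dataloader in this repo may differ. With time_gap == 0, a frame
    is simply paired with itself.
    """
    gap = max(time_gap, 0)
    return [(i, i + gap) for i in range(num_frames - gap)]

pairs = sample_frame_pairs(6, time_gap=2)
# pairs == [(0, 2), (1, 3), (2, 4), (3, 5)]
```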
- To view the training results and loss plots, run:

```bash
python -m visdom.server
```

and click the URL http://localhost:8097. To see more intermediate results, check out `./checkpoints/bp_wait/web/index.html`.
- Test the model (with the correct values assigned to the `dataset`, `EXP_ID`, and `backbone` variables):

```bash
./scripts/test_warp_models.sh ./datasets/"$dataset" $EXP_ID $backbone $dataset --norm_warp "batch" --rec_bug_fix --use_warp_speed_ups --final_conv --merge_method "concat"
```
or
```bash
python test.py --dataroot ./datasets/bp_dataset --name bp_wait --model cycle_gan_warp --netG resnet_9blocks \
--centerCropSize 800 --resize_or_crop center_crop --no_flip --phase test --epoch 200 --time_gap 0 --norm_warp "batch" \
--rec_bug_fix --final_conv --merge_method "concat"
```
The test results will be saved to an HTML file: `./results/bp_wait/latest_test/index.html`.
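The warp-related options above (`--norm_warp`, `--merge_method`, `--offset_network_block_cnt`, `--warp_layer_cnt`) configure how generator features are warped with predicted offsets before merging. As a minimal, self-contained illustration of the general idea of warping a feature map with a dense offset field via bilinear sampling (a sketch of the technique only; WAIT's deformable-convolution-based warping is more involved):

```python
import numpy as np

def warp_features(feat, offsets):
    """Bilinearly sample a feature map at positions shifted by `offsets`.

    feat:    (H, W) feature map.
    offsets: (H, W, 2) per-pixel (dy, dx) sampling offsets.
    Generic illustration of feature warping, not this repo's code.
    """
    H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Shifted sampling positions, clamped to the feature map borders.
    sy = np.clip(ys + offsets[..., 0], 0, H - 1)
    sx = np.clip(xs + offsets[..., 1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.minimum(y0 + 1, H - 1), np.minimum(x0 + 1, W - 1)
    wy, wx = sy - y0, sx - x0
    # Bilinear interpolation of the four neighboring feature values.
    top = feat[y0, x0] * (1 - wx) + feat[y0, x1] * wx
    bot = feat[y1, x0] * (1 - wx) + feat[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# A constant offset of one row samples every pixel from the row below.
f = np.arange(16, dtype=float).reshape(4, 4)
off = np.zeros((4, 4, 2))
off[..., 0] = 1.0
warped = warp_features(f, off)
```

In practice the offsets are predicted by a small network from the input frames, so the warp can compensate for motion between frames.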
## Calculate Metrics
- To calculate FID & MSE, you can directly use our scripts in the `scripts/metrics` directory:

```bash
cd scripts/metrics
./calculate_FID_batch.sh path_to_source path_to_result
```
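FID compares the mean and covariance of Inception features extracted from the source and generated frames using the Fréchet distance. As a sketch of the underlying formula applied to precomputed feature statistics (the script above handles feature extraction; this is not the repo's implementation):

```python
import numpy as np

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2).

    FID evaluates this on Inception feature statistics:
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * (sigma1 @ sigma2)^(1/2)).
    The trace of the matrix square root is computed from the eigenvalues
    of sigma1 @ sigma2; tiny negative values from numerical error are
    clipped to zero.
    """
    diff = mu1 - mu2
    eigvals = np.linalg.eigvals(sigma1 @ sigma2)
    tr_sqrt = np.sum(np.sqrt(np.maximum(eigvals.real, 0.0)))
    return diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * tr_sqrt

mu = np.zeros(2)
sigma = np.eye(2)
print(frechet_distance(mu, sigma, mu, sigma))  # identical stats -> 0.0
```

Lower is better: identical feature statistics give a distance of zero, and the score grows as the generated frames' feature distribution drifts from the source's.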
- For the FWE metric, we put two helper scripts in the `metrics/FWE` folder; just copy them into the main directory of the above repo. Now you can run `calculate_FWE.sh`.
- We also provide a single script to calculate all metrics in one run:

```bash
cd scripts/
./calculate_metrics_all.sh path_to_wait_repo exp_name dataset_name path_to_fwe_repo
```

You can find more scripts in the `scripts` directory.
## Apply a pre-trained model (WAIT)
- TODO!! You can download the pretrained models using the following link. Put a pretrained model under `./checkpoints/{name}_pretrained/200_net_G.pth`.
- Then generate the results using:

```bash
python test.py --dataroot datasets/bp_wait/testB --name {name}_pretrained --model test
```

The option `--model test` is used for generating WAIT results for one side only. Running `python test.py --model cycle_gan` instead requires loading models and generating results in both directions, which is sometimes unnecessary. The results will be saved at `./results/`. Use `--results_dir {directory_path_to_save_result}` to specify a different results directory.
## Citation

If you use this code for your research, please cite our paper:
```
@misc{hicsonmez2023wait,
      title={WAIT: Feature Warping for Animation to Illustration video Translation using GANs},
      author={Samet Hicsonmez and Nermin Samet and Fidan Samet and Oguz Bakir and Emre Akbas and Pinar Duygulu},
      year={2023},
      eprint={2310.04901},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
## Acknowledgments
Our code is heavily inspired by GANILLA.
The numerical calculations reported in this work were fully performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources).