# NTIRE 2023 Video Colorization Challenge @ CVPR 2023
## Track 1: Fréchet Inception Distance (FID) Optimization
Please use test_NTIRE23_Track_1_FID.py to evaluate our model.
We provide the colorized images HERE, and the reference images used to obtain the results HERE.
## News
- [2024-04-12] Our new project colormnet will be released soon. It features faster inference, lower GPU memory requirements, and better performance, and it requires only a single reference image to colorize hundreds or even thousands of grayscale frames.
- [2024-04-12] Added a training dataloader (see the branch DeepExemplar) for the excellent work DeepExemplar, as suggested by @doolachen.
- [2023-12-05] Integrated into 🐼 OpenXLab. Try out the online demo!
- [2023-12-05] Added inference code that uses two reference images; see test_BiSTNet.py.
- [2023-12-02] A Colab demo for BiSTNet is available.
## :briefcase: Dependencies and Installation
- PyTorch >= 1.8.0 (please do not use 2.0.1)
- CUDA >= 10.2
- mmcv == 1.x
- mmediting == 0.x
- Other required packages
```
# git clone this repository
git clone https://github.com/yyang181/NTIRE23-VIDEO-COLORIZATION.git
cd NTIRE23-VIDEO-COLORIZATION
```
Environment configuration:
```
cd BiSTNet-NTIRE2023

# create a new anaconda env
conda create -n bistnet python=3.6
conda activate bistnet

# install pytorch
conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=11.3 -c pytorch -c conda-forge

# install mmcv (1.x, please do not use 2.x)
pip install -U openmim
mim install mmcv-full

# install mmediting (0.x, please do not use 1.x)
git clone https://github.com/open-mmlab/mmediting.git
cd mmediting
pip3 install -e .

# install other pip pkgs
cd .. && pip install -r pip_requirements.txt
```
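To confirm the environment is set up as expected, a minimal sanity-check sketch (not part of the repository) can print the installed versions and check CUDA visibility:

```python
# sanity_check.py -- illustrative environment check, not part of the repository
import torch
import torchvision
import mmcv
import mmedit  # installed above from the mmediting 0.x source tree

print("torch:", torch.__version__)              # expected 1.10.1
print("torchvision:", torchvision.__version__)  # expected 0.11.2
print("mmcv-full:", mmcv.__version__)           # expected 1.x
print("mmediting:", mmedit.__version__)         # expected 0.x
print("CUDA available:", torch.cuda.is_available())
```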
## :gift: Checkpoints

| Name | URL | Script | FID | CDC |
| --- | --- | --- | --- | --- |
| BiSTNet | model | test_NTIRE23_Track_1_FID.py | 21.5372 | 0.001717 |
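If you want to reproduce an FID-style comparison between your colorized frames and the reference frames, one option is the third-party pytorch-fid package (`pip install pytorch-fid`). This is only an illustrative sketch, not necessarily the exact evaluation protocol used by the challenge, and the folder paths below are placeholders:

```python
# fid_check.py -- illustrative FID computation with the pytorch-fid package
import torch
from pytorch_fid import fid_score

device = "cuda" if torch.cuda.is_available() else "cpu"
fid = fid_score.calculate_fid_given_paths(
    ["path/to/colorized_frames", "path/to/reference_frames"],  # two image folders (placeholders)
    batch_size=32,
    device=device,
    dims=2048,  # default InceptionV3 pool3 feature dimension
)
print("FID:", fid)
```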
## :zap: Quick Inference for NTIRE 2023 Video Colorization Challenge

This version is specifically designed for the NTIRE 2023 Video Colorization Challenge @ CVPR 2023. We colorize every 50 frames with two exemplars; see clip 001 for an example.
- Download Pre-trained Models: download a pretrained colorization model from the links in the table above and put it into the folder `./BiSTNet-NTIRE2023/`, i.e. `./BiSTNet-NTIRE2023/checkpoints`, `./BiSTNet-NTIRE2023/data` and `./BiSTNet-NTIRE2023/models/protoseg_core/checkpoints`.
- Prepare Testing Data: you can put the testing images in a folder, like clip 001 at `./demo_dataset1` (an optional preparation sketch follows this section).
  - `demo_dataset1/input`: the directory of input grayscale images.
  - `demo_dataset1/ref`: the directory of reference images (only `f001.png`, `f050.png` and `f100.png` are colorful images).
  - `demo_dataset1/output`: the directory to save the colorization results.
- Test on Images:
  ```
  conda activate bistnet && cd BiSTNet-NTIRE2023
  CUDA_VISIBLE_DEVICES=0 python test_NTIRE23_Track_1_FID.py
  ```
For more details please refer to test_NTIRE23_Track_1_FID.py.
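If your test clip comes as a video file rather than extracted frames, a hypothetical helper along the following lines can build a `demo_dataset1`-style layout (grayscale inputs plus a color exemplar every 50 frames, matching the `f001.png`/`f050.png`/`f100.png` naming of clip 001). It assumes OpenCV is installed and is not part of the repository:

```python
# prepare_demo_dataset.py -- hypothetical data-preparation helper (assumes OpenCV)
import os
import cv2

def prepare_clip(video_path, out_root="demo_dataset1", ref_every=50):
    """Split a video into grayscale inputs and keep a color exemplar every `ref_every` frames."""
    input_dir = os.path.join(out_root, "input")
    ref_dir = os.path.join(out_root, "ref")
    os.makedirs(input_dir, exist_ok=True)
    os.makedirs(ref_dir, exist_ok=True)
    os.makedirs(os.path.join(out_root, "output"), exist_ok=True)

    cap = cv2.VideoCapture(video_path)
    idx = 1
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        name = f"f{idx:03d}.png"
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        cv2.imwrite(os.path.join(input_dir, name), gray)
        if idx == 1 or idx % ref_every == 0:  # keep f001, f050, f100, ... as color exemplars
            cv2.imwrite(os.path.join(ref_dir, name), frame)
        idx += 1
    cap.release()

if __name__ == "__main__":
    prepare_clip("001.mp4")  # placeholder video path
```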
## :zap: Quick Inference for BiSTNet version

This version allows an arbitrary number of input video frames with two exemplars. See clip fanghua234 for an example.
- Prepare Testing Data: you can put the testing images in a folder, like fanghua234 at `./demo_dataset2`.
  - `demo_dataset2/input`: the directory of input grayscale images.
  - `demo_dataset2/ref`: the directory of reference images (`frame0000.png`, `frame0150.png`).
  - `demo_dataset2/output`: the directory to save the colorization results.
- Test on Images:
  ```
  conda activate bistnet && cd BiSTNet-NTIRE2023
  CUDA_VISIBLE_DEVICES=0 python test_BiSTNet.py
  ```
For more details please refer to test_BiSTNet.py.
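After inference, you may want to assemble the colorized frames into a video for viewing. A minimal sketch using OpenCV is shown below; it is not part of the repository, and the output filename and frame rate are placeholders:

```python
# frames_to_video.py -- hypothetical post-processing sketch (assumes OpenCV)
import os
import cv2

frame_dir = "demo_dataset2/output"  # colorized frames written by test_BiSTNet.py
names = sorted(f for f in os.listdir(frame_dir) if f.endswith(".png"))
first = cv2.imread(os.path.join(frame_dir, names[0]))
h, w = first.shape[:2]

writer = cv2.VideoWriter(
    "colorized.mp4",                      # placeholder output name
    cv2.VideoWriter_fourcc(*"mp4v"),
    25,                                   # placeholder frame rate
    (w, h),
)
for name in names:
    writer.write(cv2.imread(os.path.join(frame_dir, name)))
writer.release()
```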
## License

BiSTNet is released under the MIT license, while some methods adopted in this project are covered by other licenses. If you are using our code for commercial purposes, please check LICENSES.md carefully. Thanks to @milmor for raising this concern about licensing.
## Citation
If this work is helpful for your research, please consider citing the following entry.
```
@article{bistnet,
  title={BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature Fusion for Deep Exemplar-based Video Colorization},
  author={Yang, Yixin and Peng, Zhongzheng and Du, Xiaoyu and Tao, Zhulin and Tang, Jinhui and Pan, Jinshan},
  journal={arXiv preprint arXiv:2212.02268},
  year={2022}
}
```
## Acknowledgement

Part of our code is taken from DeepExemplar, RAFT, HED and ProtoSeg. Thanks for their awesome work.