QuantArt

Official PyTorch implementation of the paper:

QuantArt: Quantizing Image Style Transfer Towards High Visual Fidelity
Siyu Huang<sup>*</sup> (Harvard), Jie An<sup>*</sup> (Rochester), Donglai Wei (BC), Jiebo Luo (Rochester), Hanspeter Pfister (Harvard)
CVPR 2023

We devise a new style transfer framework called QuantArt for high visual-fidelity stylization. The core idea is to push the latent representation of the generated artwork toward the centroids of the real artwork distribution via vector quantization. QuantArt achieves decent performance on a variety of image style transfer tasks.
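The vector-quantization step at the heart of this idea can be sketched as follows. This is a minimal, self-contained illustration of nearest-codebook lookup, not the repository's actual implementation; in QuantArt the codebook entries are the learned centroids of the real-artwork latent distribution (the Stage-1 VQGAN codebook), whereas here the codebook and latents are toy tensors.

```python
import torch

def quantize(z, codebook):
    """Replace each continuous latent vector with its nearest codebook centroid.

    z:        (N, D) continuous latent vectors (toy stand-ins for encoder outputs)
    codebook: (K, D) learned centroids (toy stand-in for the VQGAN codebook)
    """
    dists = torch.cdist(z, codebook)   # (N, K) pairwise Euclidean distances
    indices = dists.argmin(dim=1)      # index of the nearest centroid per vector
    return codebook[indices], indices  # quantized latents and their codes

# Toy example: two centroids, two latent vectors.
codebook = torch.tensor([[0.0, 0.0], [1.0, 1.0]])
z = torch.tensor([[0.1, -0.1], [0.9, 1.2]])
z_q, idx = quantize(z, codebook)
```

Each generated latent is thus snapped onto a centroid of the real-artwork distribution, which is what drives the high visual fidelity of the stylized output.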

<p align='center'> <img alt='Thumbnail' src='imgs/thumb.png'> </p>

Dependencies

We recommend using conda to create a new environment with all dependencies installed.

conda env create -f environment.yaml
conda activate quantart

Quick Example of Landscape Style Transfer

Download the pre-trained landscape2art model and put it under logs/. Then run

bash test.sh

The stylized landscape images (from imgs/) will be saved in logs/.

Datasets and Pre-trained Models

The pre-trained models of this repo are temporarily unavailable due to a Google Drive account issue. I will try to fix this when I have some free time.

Stage-1: The datasets and pre-trained models for codebook pretraining are as follows:

| Dataset | Pre-trained Model |
| --- | --- |
| MS_COCO | vqgan_imagenet_f16_1024.ckpt |
| WikiArt | vqgan_wikiart_f16_1024.ckpt |
| LandscapesHQ | vqgan_landscape_f16_1024.ckpt |
| FFHQ | vqgan_faceshq_f16_1024.ckpt |
| Metfaces | vqgan_metfaces_f16_1024.ckpt |

Stage-2: The datasets and pre-trained models for style transfer experiments are as follows:

| Task | Pre-trained Model | Content | Style |
| --- | --- | --- | --- |
| photo->artwork | coco2art | MS_COCO | WikiArt |
| landscape->artwork | landscape2art | LandscapesHQ | WikiArt |
| landscape->artwork (non-VQ) | landscape2art_continuous | LandscapesHQ | WikiArt |
| face->artwork | face2art | FFHQ | Metfaces |
| artwork->artwork | art2art | WikiArt | WikiArt |
| photo->photo | coco2coco | MS_COCO | MS_COCO |
| landscape->landscape | landscape2landscape | LandscapesHQ | LandscapesHQ |

Testing

Follow Datasets and Pre-trained Models to download more datasets and pre-trained models. For instance, for the landscape-to-artwork style transfer model, the folder structure should be

QuantArt
├── configs
├── datasets
│   ├── lhq_1024_jpg
│   │   ├── lhq_1024_jpg
│   │   │   ├── 0000000.jpg
│   │   │   ├── 0000001.jpg
│   │   │   ├── 0000002.jpg
│   │   │   ├── ...
│   ├── painter-by-numbers
│   │   ├── train
│   │   │   ├── 100001.jpg
│   │   │   ├── 100002.jpg
│   │   │   ├── 100003.jpg
│   │   │   ├── ...
│   │   ├── test
│   │   │   ├── 0.jpg
│   │   │   ├── 100000.jpg
│   │   │   ├── 100004.jpg
│   │   │   ├── ...
├── logs
│   ├── landscape2art
│   │   ├── checkpoints
│   │   ├── configs
├── taming
├── environment.yaml
├── main.py
├── train.sh
└── test.sh

Run the following command to test the pre-trained model on the testing dataset:

python -u main.py --base logs/landscape2art/configs/test.yaml -n landscape2art -t False --gpus 0,

Training

Stage-1: Prepare the WikiArt dataset as above. Download the file lists painter-by-numbers-train.txt and painter-by-numbers-test.txt and put them under datasets/. Run the following command to train a Stage-1 model (i.e., an autoencoder and a codebook). Four GPUs are recommended but not required.

python -u main.py --base configs/vqgan_wikiart.yaml -t True --gpus 0,1,2,3

Two separate Stage-1 models are required, one for the content dataset and one for the style dataset.

Stage-2: Run bash train.sh or the following command to train a photo-to-artwork model.

python -u main.py --base configs/coco2art.yaml -t True --gpus 0,

More training configs of Stage-2 models can be found in configs/.

Custom Dataset

Unpaired data: To test on unpaired data, follow the comments in configs/custom_unpaired.yaml to specify the model checkpoints and data paths. Then run

python -u main.py --base configs/custom_unpaired.yaml -n custom_unpaired -t False --gpus 0,

Paired data: To test on paired data, the corresponding content and style images (stored in two separate folders) must share the same file names. Follow the comments in configs/custom_paired.yaml to specify the model checkpoints and data paths, then run

python -u main.py --base configs/custom_paired.yaml -n custom_paired -t False --gpus 0,
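Since paired testing relies on content and style images sharing file names, a quick sanity check before launching the model can catch mismatches early. The helper below is a hypothetical convenience, not part of this repository; the directory paths and the `*.jpg` glob pattern are assumptions you should adapt to your data.

```python
from pathlib import Path

def check_paired(content_dir, style_dir, pattern="*.jpg"):
    """Return file names present in one folder but missing from the other.

    Both return values being empty means every content image has a
    style counterpart with the same name, as paired testing requires.
    """
    content = {p.name for p in Path(content_dir).glob(pattern)}
    style = {p.name for p in Path(style_dir).glob(pattern)}
    return content - style, style - content
```

For example, `check_paired("datasets/my_content", "datasets/my_style")` returning two empty sets indicates the folders are properly paired.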

Citation

@inproceedings{huang2023quantart,
    title={QuantArt: Quantizing Image Style Transfer Towards High Visual Fidelity},
    author={Siyu Huang and Jie An and Donglai Wei and Jiebo Luo and Hanspeter Pfister},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    month={June},
    year={2023}
}

Acknowledgement

This repository builds heavily upon the amazing VQGAN.

Contact

Siyu Huang (huangsiyutc@gmail.com).