PCA-based knowledge distillation towards lightweight and content-style balanced photorealistic style transfer models

Code for our CVPR 2022 paper PCA-based knowledge distillation towards lightweight and content-style balanced photorealistic style transfer models.

Advantages of our distilled models

Models and files

We apply our PCA distillation to two backbones: VGG and MobileNet. The corresponding files can be found in the folders VGG backbone and MobileNet backbone. For each backbone, we provide our trained parameters in the folder ckpts, a demo of how to perform style transfer in demo.ipynb, the training code in train_eigenbases.py and train_distilled_model.py, and the distilled model structure in utils/lightweight_model.py.
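As a rough orientation, the sketch below mirrors the kind of workflow demo.ipynb walks through: load a distilled checkpoint and stylize a content image with a style image. Every name in it (the DistilledModel class, the checkpoint filename, and the forward signature) is a placeholder, not the repo's actual API; demo.ipynb is the authoritative reference.

```python
# Placeholder sketch of the demo workflow; DistilledModel, the checkpoint
# filename, and the forward signature are assumptions, not the repo's API.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

from utils.lightweight_model import DistilledModel  # hypothetical class name

to_tensor = transforms.ToTensor()
content = to_tensor(Image.open('content.jpg').convert('RGB')).unsqueeze(0)
style = to_tensor(Image.open('style.jpg').convert('RGB')).unsqueeze(0)

model = DistilledModel()
# Checkpoint filename is a placeholder; see the ckpts folder for real files.
model.load_state_dict(torch.load('ckpts/distilled_vgg.pth', map_location='cpu'))
model.eval()

with torch.no_grad():
    stylized = model(content, style)  # assumed forward signature

save_image(stylized.clamp(0, 1), 'result.jpg')
```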

Note that train_eigenbases.py must be run first to derive the global eigenbases; train_distilled_model.py then uses these eigenbases to distill the models.
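To make the two-stage idea concrete, here is a minimal sketch of PCA-based eigenbasis extraction followed by a projection-matching distillation loss, assuming PyTorch and a teacher encoder that emits (N, C, H, W) feature maps. The function names and the exact loss are illustrative, not the paper's formulation; the actual training code lives in train_eigenbases.py and train_distilled_model.py.

```python
# Minimal sketch, not the repo's code: compute_global_eigenbasis and
# distillation_loss are illustrative names for the two stages described above.
import torch
import torch.nn.functional as F

def compute_global_eigenbasis(features: torch.Tensor, k: int) -> torch.Tensor:
    """Stage 1: PCA over teacher feature maps.

    features: (N, C, H, W) activations collected from the teacher encoder.
    Returns a (C, k) matrix whose columns are the top-k eigenvectors of the
    channel covariance, i.e. a global eigenbasis for the feature space.
    """
    c = features.size(1)
    x = features.flatten(2).permute(1, 0, 2).reshape(c, -1)  # (C, N*H*W)
    x = x - x.mean(dim=1, keepdim=True)                      # center channels
    cov = x @ x.t() / (x.size(1) - 1)                        # (C, C)
    _, eigvecs = torch.linalg.eigh(cov)                      # ascending order
    return eigvecs[:, -k:]                                   # top-k basis

def distillation_loss(student_feat: torch.Tensor,
                      teacher_feat: torch.Tensor,
                      basis: torch.Tensor) -> torch.Tensor:
    """Stage 2: match the student's k-channel features to the teacher's
    features projected onto the global eigenbasis."""
    t = teacher_feat.flatten(2)                        # (N, C, H*W)
    projected = torch.einsum('ck,nct->nkt', basis, t)  # (N, k, H*W)
    return F.mse_loss(student_feat.flatten(2), projected)
```

The intuition this captures is that the student only has to reproduce the top-k principal directions of the teacher's features rather than all channels, which is what allows the distilled model to be much smaller.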

Requirements

Citation

If you find this repo useful, please cite our paper PCA-based knowledge distillation towards lightweight and content-style balanced photorealistic style transfer models published in CVPR 2022.
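For convenience, a BibTeX entry reconstructed from the title and venue above is given below; please verify the entry key and author list against the official proceedings.

```bibtex
@inproceedings{chiu2022pca,
  title     = {PCA-Based Knowledge Distillation Towards Lightweight and Content-Style Balanced Photorealistic Style Transfer Models},
  author    = {Chiu, Tai-Yin and Gurari, Danna},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022}
}
```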

This codebase largely extends our previous work PhotoWCT2: Compact Autoencoder for Photorealistic Style Transfer Resulting from Blockwise Training and Skip Connections of High-Frequency Residuals, published in WACV 2022. Please take a look at it if you are interested.