Spatially-Correlative Loss

arXiv | website <br>

<img src='imgs/FSeSim-frame.gif' align="center"> <br>

We provide the PyTorch implementation of "The Spatially-Correlative Loss for Various Image Translation Tasks". Based on the inherent self-similarity of objects, we propose a new structure-preserving loss for one-sided unsupervised I2I networks. The new loss deals only with the spatial relationships among repeated signals, regardless of their original absolute values.

The Spatially-Correlative Loss for Various Image Translation Tasks <br> Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai <br> NTU and Monash University <br> In CVPR 2021 <br>
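
The loss in this repository is computed on deep network features; the snippet below is only a minimal, self-contained sketch of the underlying idea, not the repository's implementation. It builds a local self-similarity map for each feature map and penalises differences between the two maps, so only spatial relationships, never absolute feature values, contribute to the loss. The function names and the random tensors are illustrative placeholders.

import torch
import torch.nn.functional as F

def self_similarity_map(feat, patch_size=7):
    # For every spatial location, compute the cosine similarity between its
    # feature vector and the features in a patch_size x patch_size neighbourhood.
    B, C, H, W = feat.shape
    feat = F.normalize(feat, dim=1)                                  # unit-length features
    neigh = F.unfold(feat, kernel_size=patch_size, padding=patch_size // 2)
    neigh = neigh.view(B, C, patch_size * patch_size, H * W).permute(0, 3, 2, 1)
    center = feat.view(B, C, H * W).permute(0, 2, 1).unsqueeze(2)    # (B, H*W, 1, C)
    return (center * neigh).sum(dim=-1)                              # (B, H*W, patch_size**2)

def spatially_correlative_loss(feat_src, feat_tgt):
    # Compare only the self-similarity patterns of the two feature maps,
    # so the absolute feature values never enter the loss.
    return F.l1_loss(self_similarity_map(feat_src), self_similarity_map(feat_tgt))

# toy usage with random stand-ins for deep features of the source and translated images
feat_src = torch.randn(1, 64, 32, 32)
feat_tgt = torch.randn(1, 64, 32, 32)
print(spatially_correlative_loss(feat_src, feat_tgt))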

Example Results

Unpaired Image-to-Image Translation

<img src='imgs/unpairedI2I-translation.gif' align="center">

Single Image Translation

<img src='imgs/single-translation.gif' align="center">

More results can be found on the project page.

Getting Started

Installation

This code was tested with PyTorch 1.7.0, CUDA 10.2, and Python 3.7.

pip install visdom dominate
git clone https://github.com/lyndonzheng/F-LSeSim
cd F-LSeSim
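
To confirm your environment matches the tested setup, an optional quick check like the following (not part of the repository) can be run from Python:

import torch

print(torch.__version__)            # tested with 1.7.0
print(torch.version.cuda)           # tested with CUDA 10.2
print(torch.cuda.is_available())    # should be True if a CUDA device is usable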

Datasets

Please refer to the original CUT and CycleGAN repositories to download datasets and to learn how to create your own datasets.

Training

# train the unpaired image-to-image translation model
sh ./scripts/train_sc.sh
# train the single-image translation model
sh ./scripts/train_sinsc.sh

Because the multi-modal I2I translation model was built on MUNIT, we do not plan to merge that code into this repository. If you wish to obtain multi-modal results, please contact us at chuanxia001@e.ntu.edu.sg.

Testing

# test the unpaired image-to-image translation model
sh ./scripts/test_sc.sh
# test the single-image translation model
sh ./scripts/test_sinsc.sh
# compute the FID score
sh ./scripts/test_fid.sh

Pretrained Models

Download the pre-trained models (to be released soon) using the following links and put them under the checkpoints/ directory.

Citation

@inproceedings{zheng2021spatiallycorrelative,
  title={The Spatially-Correlative Loss for Various Image Translation Tasks},
  author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2021}
}

Acknowledgments

Our code is based on CUT and CycleGAN. We also thank pytorch-fid for FID computation, LPIPS for the diversity score, and D&C for the density and coverage evaluation.