Spectral Hint GAN

This repo hosts the official implementation of:

Xingqian Xu, Shant Navasardyan, Vahram Tadevosyan, Andranik Sargsyan, Yadong Mu, and Humphrey Shi, "Image Completion with Heterogeneously Filtered Spectral Hints," WACV 2023 (paper / arXiv link).

News

Introduction

<p align="center"> <img src="assets/teaser.png" width="99%"> </p>

Spectral Hint GAN (SH-GAN) is a high-performing inpainting network empowered by CoModGAN and novel spectral processing techniques. SH-GAN reaches state-of-the-art performance on FFHQ and Places2 with free-form masks.

Network and Algorithm

The overall structure of SH-GAN is shown in the following figure:

<p align="center"> <img src="assets/network.png" width="99%"> </p>

The structure of our Spectral Hint Unit is shown in the following figure:

<p align="center"> <img src="assets/shu.png" width="40%"> </p>

Heterogeneous Filtering, explained:

<p align="center"> <img src="assets/hfilter.png" width="80%"> </p>

The Gaussian Split Algorithm, explained:

<p align="center"> <img src="assets/split.png" width="99%"> </p>

Data

We use FFHQ and Places2 as our main datasets. Download them from their official links: FFHQ, Places2.

Directory of FFHQ data for our code:

├── data
│   └── ffhq
│       ├── ffhq256x256.zip
│       └── ffhq512x512.zip

Directory of Places2 data for our code:

├── data
│   └── Places2
│       ├── data_challenge
│       │   ...
│       └── val_large
│           ...
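
To sanity-check that the data sits where the code expects it, a small script like the following (paths taken from the directory trees above) can be run from the repository root:

```python
import os

# Expected locations, taken from the directory trees above.
expected = [
    "data/ffhq/ffhq256x256.zip",
    "data/ffhq/ffhq512x512.zip",
    "data/Places2/data_challenge",
    "data/Places2/val_large",
]

for path in expected:
    status = "ok" if os.path.exists(path) else "MISSING"
    print(f"{status:8s} {path}")
```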

Setup

conda create -n shgan python=3.8
conda activate shgan
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
pip install -r requirement.txt
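
A quick way to confirm the environment matches the pins above (PyTorch 1.8.0 / torchvision 0.9.0 with CUDA 11.1) before running anything:

```python
import torch
import torchvision

# The conda command above pins torch 1.8.0 / torchvision 0.9.0 with CUDA 11.1.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU count:", torch.cuda.device_count())
```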

Results and pretrained models

| Model    | DIM | DATA    | FID    | LPIPS  | PSNR  | SSIM   | Download |
|----------|-----|---------|--------|--------|-------|--------|----------|
| CoModGAN | 256 | FFHQ    | 4.7755 | 0.2568 | 16.24 | 0.5913 |          |
| SH-GAN   | 256 | FFHQ    | 4.3459 | 0.2542 | 16.37 | 0.5911 | link     |
| CoModGAN | 512 | FFHQ    | 3.6996 | 0.2469 | 18.46 | 0.6956 |          |
| SH-GAN   | 512 | FFHQ    | 3.4134 | 0.2447 | 18.43 | 0.6936 | link     |
| CoModGAN | 256 | Places2 | 9.3621 | 0.3990 | 14.50 | 0.4923 |          |
| SH-GAN   | 256 | Places2 | 7.5036 | 0.3940 | 14.58 | 0.4958 | link     |
| CoModGAN | 512 | Places2 | 7.9735 | 0.3420 | 16.00 | 0.5953 |          |
| SH-GAN   | 512 | Places2 | 7.0277 | 0.3386 | 16.03 | 0.5973 | link     |
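
For reference, PSNR in the table is the standard peak signal-to-noise ratio; a minimal computation (not the evaluation code used to produce the numbers above) looks like:

```python
import torch

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio for images scaled to [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

a = torch.rand(1, 3, 256, 256)
b = torch.rand(1, 3, 256, 256)
print(float(psnr(a, b)))  # roughly 7.8 dB for two independent uniform-random images
```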

Evaluation

Here are the one-line shell commands to evaluate SH-GAN on FFHQ 256/512 and Places2 256/512.

python main.py --experiment shgan_ffhq256_eval --gpu 0 1 2 3 4 5 6 7 --eval 99999
python main.py --experiment shgan_ffhq512_eval --gpu 0 1 2 3 4 5 6 7 --eval 99999
python main.py --experiment shgan_places256_eval --gpu 0 1 2 3 4 5 6 7 --eval 99999
python main.py --experiment shgan_places512_eval --gpu 0 1 2 3 4 5 6 7 --eval 99999

You also need to:

Some simple things to do to resolve common issues:

Training

Coming soon.

Citation

@inproceedings{xu2023image,
  title={Image Completion with Heterogeneously Filtered Spectral Hints},
  author={Xu, Xingqian and Navasardyan, Shant and Tadevosyan, Vahram and Sargsyan, Andranik and Mu, Yadong and Shi, Humphrey},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={4591--4601},
  year={2023}
}

Acknowledgement

Part of the code reorganizes/reimplements code from the following repositories: the official CoModGAN GitHub and the official StyleGAN2-ADA GitHub.