Semantic-guided Multi-mask Image Harmonization

Introduction

This is the official code of the paper: Semantic-guided Multi-mask Image Harmonization

Quick Start

Data Preparation

In this paper, we construct two benchmarks, HScene and HLIP; we also conduct experiments on iHarmony4.

The HScene and HLIP datasets can be downloaded from Google Drive. The composite images in /datasets/HScene(HLIP)/test/composite form the test set we generated.

Download the datasets and put them under the SgMMH folder.
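After extraction, the layout should look roughly as follows (inferred from the path mentioned above; any sibling folders not named in this README are omitted):

```
SgMMH/
└── datasets/
    ├── HScene/
    │   └── test/
    │       └── composite/
    └── HLIP/
        └── test/
            └── composite/
```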

Train and Test

Our code is based on BasicSR; thanks to the authors for this excellent project.

We provide training and test examples: train.sh and test.sh.

One quick training command:

CUDA_VISIBLE_DEVICES=0,1,2,3 ./scripts/dist_train.sh 4 \
options/train/OMGAN/train_OM_Mask_HScene.yml

One quick testing command:

CUDA_VISIBLE_DEVICES=0  ./scripts/dist_test.sh 1 \
options/test/OMGAN/test_OM_HScene.yml

One quick evaluation command:

python basicsr/metrics/calculate_lpips.py --path results/test_OM_HScene/visualization/HScene
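The script above computes LPIPS; MSE and PSNR are also commonly reported for image harmonization. A minimal, self-contained sketch of those two metrics with NumPy (the function names here are illustrative, not part of this repository):

```python
import numpy as np

def mse(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Mean squared error between two images with values in [0, 255]."""
    diff = img_a.astype(np.float64) - img_b.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means the images are closer."""
    err = mse(img_a, img_b)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / err)
```

In practice these would be averaged over all harmonized/ground-truth pairs in the results folder.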

Pretrained Model

Sg-MMH: Google Drive

Download the model and update the path in each YAML option file.
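For reference, BasicSR-style option files point to pretrained weights under a `path` section; a sketch of the relevant fragment (the checkpoint file name below is a placeholder, not the actual name of the released model):

```yaml
# e.g. in options/test/OMGAN/test_OM_HScene.yml
path:
  # placeholder name -- point this at the downloaded checkpoint
  pretrain_network_g: experiments/pretrained_models/sgmmh_hscene.pth
  strict_load_g: true
```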

We also adapt HarmonyTransformer with our operator masks and provide the pretrained model on Google Drive.

Acknowledgement

The overall code framework, as well as some of the data modules and model functions in the source code, builds on BasicSR, DoveNet, pix2pix, and HarmonyTransformer; we gratefully acknowledge these repositories.