# Bridging the Visual Gap: Wide-Range Image Blending
PyTorch implementation of our CVPR 2021 paper "Bridging the Visual Gap: Wide-Range Image Blending".
You can visit our project website here.
In this paper, we propose a novel model to tackle the problem of wide-range image blending, which aims to smoothly merge two different images into a panorama by generating novel image content for the intermediate region between them.
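To make the task concrete, below is a minimal conceptual sketch of the input/output contract, with illustrative tensor shapes and a placeholder in place of the actual generator (none of these names come from this repo's API):

```python
import torch

# Conceptual sketch only: the model synthesizes the intermediate region
# between the two inputs, and the three parts are concatenated along the
# width into a single panorama. `middle` is a placeholder here.
left = torch.rand(1, 3, 256, 256)    # left input photo (N, C, H, W)
right = torch.rand(1, 3, 256, 256)   # right input photo
middle = torch.zeros_like(left)      # stands in for the generated content
panorama = torch.cat([left, middle, right], dim=3)
print(panorama.shape)                # torch.Size([1, 3, 256, 768])
```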
<div align=center><img height="230" src="https://github.com/julia0607/Wide-Range-Image-Blending/blob/main/samples/teaser.gif"/></div>

## Paper
Bridging the Visual Gap: Wide-Range Image Blending
Chia-Ni Lu, Ya-Chu Chang, Wei-Chen Chiu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
Please cite our paper if you find it useful for your research.
```
@InProceedings{lu2021bridging,
    author = {Lu, Chia-Ni and Chang, Ya-Chu and Chiu, Wei-Chen},
    title = {Bridging the Visual Gap: Wide-Range Image Blending},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2021}
}
```
## Installation
- This code was developed with Python 3.7.4, PyTorch 1.0.0, and CUDA 9.2
- Other requirements: numpy, skimage, tensorboardX
- Clone this repo:

```
git clone https://github.com/julia0607/Wide-Range-Image-Blending.git
cd Wide-Range-Image-Blending
```
## Testing
Download our pre-trained model weights from here and put them under `weights/`.
Test the sample data provided in this repo:

```
python test.py
```
Or download our paired test data from here and put them under `data/`. Then run the testing code:

```
python test.py --test_data_dir_1 ./data/scenery6000_paired/test/input1/ \
               --test_data_dir_2 ./data/scenery6000_paired/test/input2/
```
Run your own data:

```
python test.py --test_data_dir_1 YOUR_DATA_PATH_1 \
               --test_data_dir_2 YOUR_DATA_PATH_2 \
               --save_dir YOUR_SAVE_PATH
```
If your test data isn't paired already, add `--rand_pair True` to randomly pair the data.
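For reference, random pairing just shuffles one file list against the other; a rough stand-alone illustration (this helper is hypothetical, not the repo's actual code):

```python
import random

# Hypothetical illustration of random pairing: match each image from the
# first directory with a randomly chosen image from the second.
def rand_pair(files_1, files_2, seed=0):
    shuffled = list(files_2)
    random.Random(seed).shuffle(shuffled)
    return list(zip(files_1, shuffled))

print(rand_pair(['a.png', 'b.png'], ['c.png', 'd.png']))
```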
## Training
We adopt the scenery dataset proposed in Very Long Natural Scenery Image Prediction by Outpainting for our experiments, splitting it into 5040 training images and 1000 testing images.
Download the dataset with our split of training and test sets from here and put it under `data/`. You can unzip the `.zip` file with `jar xvf scenery6000_split.zip`.
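If `jar` is unavailable, the Python standard library can extract the archive as well (the paths below assume the archive sits in the current directory):

```python
import zipfile

# Extract the dataset archive into data/ without relying on `jar`.
with zipfile.ZipFile('scenery6000_split.zip') as zf:
    zf.extractall('data/')
```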
Then run the training code for the self-reconstruction stage (first stage):

```
python train_SR.py
```
After finishing the training of the self-reconstruction stage, move the latest model weights from `checkpoints/SR_Stage/` to `weights/` (or use our pre-trained weights from the self-reconstruction stage), and run the training code for the fine-tuning stage (second stage):

```
python train_FT.py --load_pretrain True
```
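Conceptually, `--load_pretrain True` initializes the fine-tuning stage from the checkpoint saved by the first stage; a minimal hypothetical sketch of that save/load round trip (the module and file names here are made up, not the repo's):

```python
import torch
import torch.nn as nn

# Stand-in for the generator; the real networks live in this repo's code.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)

torch.save(model.state_dict(), 'demo_SR_weights.pth')      # end of stage one
model.load_state_dict(torch.load('demo_SR_weights.pth'))   # start of stage two
```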
Train the model with your own dataset:

```
python train_SR.py --train_data_dir YOUR_DATA_PATH
```
After finishing the training of the self-reconstruction stage, move the latest model weights to `weights/`, and run the training code for the fine-tuning stage (second stage):

```
python train_FT.py --load_pretrain True --train_data_dir YOUR_DATA_PATH
```
If your training data isn't paired already, add `--rand_pair True` to randomly pair the data in the fine-tuning stage.
## TensorBoard Visualization
Visualization on TensorBoard for training and validation is supported. Run `tensorboard --logdir YOUR_LOG_DIR` to view training progress.
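Since tensorboardX is among the requirements, here is a minimal usage sketch of how such logs are typically written (the log directory and tag names are illustrative, not the ones this repo uses):

```python
from tensorboardX import SummaryWriter

# Write a dummy scalar curve that TensorBoard can display.
writer = SummaryWriter('logs/example')
for step in range(10):
    writer.add_scalar('train/loss', 1.0 / (step + 1), step)
writer.close()
```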
## Acknowledgments
Our code is partially based on Very Long Natural Scenery Image Prediction by Outpainting and a PyTorch re-implementation of Generative Image Inpainting with Contextual Attention.
The implementation of the ID-MRF loss is borrowed from Image Inpainting via Generative Multi-column Convolutional Neural Networks.