
arXiv, Project page, Paper, Video, Slide, Poster

Blind Image Decomposition (BID)

The BID task requires separating a superimposed image into its constituent underlying images in a blind setting, that is, both the source components involved in the mixing and the mixing mechanism are unknown.
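As a toy illustration of the blind setting (not the paper's actual mixing model), a superimposed image can be thought of as a weighted combination of source components with unknown weights; a decomposition method sees only the mixture, not the sources, the weights, or even how many sources were mixed. A minimal sketch in plain Python, using flat lists of made-up pixel values:

```python
import random

def superimpose(sources, weights=None):
    """Mix several toy 'images' (flat lists of pixel values in [0, 1])
    with non-negative weights that sum to 1. If no weights are given,
    random ones are drawn -- mimicking an unknown mixing mechanism."""
    if weights is None:
        raw = [random.random() for _ in sources]
        total = sum(raw)
        weights = [w / total for w in raw]
    mixed = [
        sum(w * img[i] for w, img in zip(weights, sources))
        for i in range(len(sources[0]))
    ]
    return mixed, weights

# Two toy 4-pixel "images".
a = [0.0, 0.2, 0.4, 0.6]
b = [1.0, 0.8, 0.6, 0.4]
mixed, w = superimpose([a, b])
# In the blind setting, a BID method is handed only `mixed`:
# it must recover a and b without knowing w or the number of sources.
```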

We invite our community to explore the novel BID task, including discovering interesting application areas, developing novel methods, extending the BID setting, and constructing benchmark datasets.

Blind Image Decomposition<br> Junlin Han, Weihao Li, Pengfei Fang, Chunyi Sun, Jie Hong, Ali Armin, Lars Petersson, Hongdong Li<br> DATA61-CSIRO and Australian National University<br> European Conference on Computer Vision (ECCV), 2022

BID demo: <img src='imgs/BID.gif' align="left" width=950>

BIDeN (Blind Image Decomposition Network):

<img src='imgs/network.png' align="left" width=950>

Applications of BID

Deraining (rain streak, snow, haze, raindrop): <img src='imgs/taskII.png' align="left" width=1100> <br> Rows 1-6 present 6 cases of the same scene. The 6 cases are (1): rain streak, (2): rain streak + snow, (3): rain streak + light haze, (4): rain streak + heavy haze, (5): rain streak + moderate haze + raindrop, (6): rain streak + snow + moderate haze + raindrop. <br>

Joint shadow/reflection/watermark removal: <img src='imgs/taskIII.png' align="left" width=950>

Prerequisites

Python 3.7 or above.

For packages, see requirements.txt.

Getting started

git clone https://github.com/JunlinHan/BID.git

BID Datasets

BID Train/Test

Task I: Mixed image decomposition across multiple domains:

Train (biden n, where n is the maximum number of source components):

python train.py --dataroot ./datasets/image_decom --name biden2 --model biden2 --dataset_mode unaligned2
python train.py --dataroot ./datasets/image_decom --name biden3 --model biden3 --dataset_mode unaligned3
...
python train.py --dataroot ./datasets/image_decom --name biden8 --model biden8 --dataset_mode unaligned8

Test a single case (use n = 3 as an example):

python test.py --dataroot ./datasets/image_decom --name biden3 --model biden3 --dataset_mode unaligned3 --test_input A
python test.py --dataroot ./datasets/image_decom --name biden3 --model biden3 --dataset_mode unaligned3 --test_input AB

... and other cases: change --test_input to the case you want to test.

Test all cases:

python test2.py --dataroot ./datasets/image_decom --name biden3 --model biden3 --dataset_mode unaligned3

Task II.A: Real-scenario deraining in driving:

Train:

python train.py --dataroot ./datasets/raina --name task2a --model raina --dataset_mode raina

Task II.B: Real-scenario deraining in general:

Train:

python train.py --dataroot ./datasets/rainb --name task2b --model rainb --dataset_mode rainb

Task III: Joint shadow/reflection/watermark removal:

Train:

python train.py --dataroot ./datasets/jointremoval_v1 --name task3_v1 --model jointremoval --dataset_mode jointremoval
or
python train.py --dataroot ./datasets/jointremoval_v2 --name task3_v2 --model jointremoval --dataset_mode jointremoval

The test results will be saved to an HTML file under ./results/.

Apply a pre-trained BIDeN model

We provide our pre-trained BIDeN models at: https://drive.google.com/drive/folders/1UBmdKZXYewJVXHT4dRaat4g8xZ61OyDF?usp=sharing

Download the pre-trained model, unzip it, and put it inside ./checkpoints.

Example usage: download the dataset of task II.A (rain in driving) and the pre-trained model of task II.A, then test the rain streak case:

python test.py --dataroot ./datasets/raina --name task2a --model raina --dataset_mode raina --test_input B 

Evaluation

For FID score, use pytorch-fid.

For PSNR/SSIM/RMSE/NIQE/BRISQUE, see ./metrics/.
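For reference, PSNR is defined as 10 * log10(MAX^2 / MSE), where MAX is the maximum pixel value. Below is a minimal pure-Python sketch on toy pixel lists; for reported numbers, the repository's own scripts in ./metrics/ should be used instead:

```python
import math

def psnr(ref, out, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel lists."""
    mse = sum((r - o) ** 2 for r, o in zip(ref, out)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy 4-pixel example: MSE = 5/4 = 1.25.
ref = [52, 55, 61, 66]
out = [54, 55, 60, 66]
print(round(psnr(ref, out), 2))  # prints 47.16
```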

Raindrop effect

See ./raindrop/.

Citation

If you use our code or our results, please consider citing our paper. Thanks in advance!

@inproceedings{han2022bid,
  title={Blind Image Decomposition},
  author={Junlin Han and Weihao Li and Pengfei Fang and Chunyi Sun and Jie Hong and Mohammad Ali Armin and Lars Petersson and Hongdong Li},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2022}
}

Contact

junlin.han@data61.csiro.au or junlinhcv@gmail.com

Acknowledgments

Our code is developed based on DCLGAN and CUT. We thank the authors of MPRNet, perceptual-reflection-removal, Double-DIP, and Deep-adversarial-decomposition for sharing their source code. We thank exposure-fusion-shadow-removal and ghost-free-shadow-removal for providing their source code and results. We thank pytorch-fid for FID computation.