
Conditional Sequential Modulation for Efficient Global Image Retouching ([Paper Link](https://arxiv.org/abs/2009.10390))

By Jingwen He*, Yihao Liu*, Yu Qiao, and Chao Dong (* indicates equal contribution)

<p align="center"> <img src="figures/csrnet_fig1.png"> </p>

<b>Left</b>: Compared with existing state-of-the-art methods, our method achieves superior performance with extremely few parameters (1/13 of HDRNet and 1/250 of White-Box). The diameter of each circle represents the number of trainable parameters. <b>Right</b>: Image retouching examples.

<p align="center"> <img src="figures/csrnet_fig6.png"> </p>

The first row shows smooth transition effects between different styles (expert A to expert B) obtained by image interpolation. In the second row, we use image interpolation to control the retouching strength from the input image to the automatically retouched result. We denote the interpolation coefficient α for each image.
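As a rough illustration of the interpolation described above, the sketch below linearly blends two retouched outputs with a coefficient α. The file names and the use of OpenCV are assumptions for this example, not part of the repository.

```python
# Minimal sketch of pixel-wise image interpolation with coefficient alpha.
# File names and the OpenCV dependency are placeholders for illustration only.
import cv2
import numpy as np

def interpolate_images(img_a, img_b, alpha):
    """Blend two images: alpha = 0 returns img_a, alpha = 1 returns img_b."""
    blended = (1.0 - alpha) * img_a.astype(np.float32) + alpha * img_b.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Example: smooth transition between two retouched results (hypothetical file names).
img_a = cv2.imread("result_expertA.png")
img_b = cv2.imread("result_expertB.png")
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    cv2.imwrite(f"blend_alpha_{alpha:.2f}.png", interpolate_images(img_a, img_b, alpha))
```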

BibTeX

@article{he2020conditional,
  title={Conditional Sequential Modulation for Efficient Global Image Retouching},
  author={He, Jingwen and Liu, Yihao and Qiao, Yu and Dong, Chao},
  journal={arXiv preprint arXiv:2009.10390},
  year={2020}
}

Dependencies and Installation

Datasets

Here, we provide the preprocessed MIT-Adobe FiveK dataset, which contains both training and testing pairs.
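As an informal illustration of how the paired data is organized, the sketch below checks that every input (LQ) image has a ground-truth (GT) counterpart. The directory names are placeholders, not the dataset's actual layout.

```python
# Sanity check that LQ (input) and GT (retouched) images form one-to-one pairs.
# The directory names below are placeholders; substitute your own dataroot paths.
import os

dataroot_LQ = "datasets/FiveK/test_LQ"   # assumed path to input images
dataroot_GT = "datasets/FiveK/test_GT"   # assumed path to ground-truth images

lq_files = sorted(os.listdir(dataroot_LQ))
gt_files = sorted(os.listdir(dataroot_GT))

assert len(lq_files) == len(gt_files), "LQ/GT image counts differ"
for lq, gt in zip(lq_files, gt_files):
    assert os.path.splitext(lq)[0] == os.path.splitext(gt)[0], f"unpaired files: {lq} vs {gt}"
print(f"Found {len(lq_files)} paired images.")
```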

How to Test

  1. Modify the configuration file options/test/test_Enhance.yml, e.g., dataroot_GT, dataroot_LQ, and pretrain_model_G. (We provide a pretrained model in experiments/pretrain_models/csrnet.pth.)
  2. Run the command:
python test_CSRNet.py -opt options/test/test_Enhance.yml
  3. Modify the Python file calculate_metrics.py: set input_path and GT_path (Lines 139 and 140). Then run (a rough sketch of the metric computation is shown after this list):
python calculate_metrics.py
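For reference, here is a minimal sketch of the kind of paired PSNR computation calculate_metrics.py performs. The folder paths, image format, and the use of OpenCV are assumptions for illustration, not the script's actual implementation.

```python
# Rough sketch: average PSNR between retouched results and ground-truth images.
# input_path / GT_path mirror the variables mentioned above; everything else is assumed.
import os
import cv2
import numpy as np

def psnr(img1, img2, max_val=255.0):
    """Peak signal-to-noise ratio between two uint8 images."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

input_path = "results/CSRNet"        # folder with retouched outputs (assumed)
GT_path = "datasets/FiveK/test_GT"   # folder with expert-retouched targets (assumed)

scores = []
for name in sorted(os.listdir(input_path)):
    out = cv2.imread(os.path.join(input_path, name))
    gt = cv2.imread(os.path.join(GT_path, name))
    scores.append(psnr(out, gt))
print(f"Average PSNR over {len(scores)} images: {np.mean(scores):.2f} dB")
```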

How to Train

  1. Modify the configuration file options/train/train_Enhance.yml, e.g., dataroot_GT and dataroot_LQ. (A small sanity-check sketch for the edited paths is given after this list.)
  2. Run the command:
python train.py -opt options/train/train_Enhance.yml
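Before launching training, it can help to confirm that the dataroot paths you set in step 1 actually exist. The sketch below does this without assuming the config's internal structure; it simply searches the parsed YAML for keys named dataroot_GT and dataroot_LQ, and requires PyYAML.

```python
# Quick sanity check: confirm the dataroot paths set in the training config exist on disk.
# The config layout is not assumed; we search recursively for dataroot_GT / dataroot_LQ keys.
import os
import yaml  # pip install pyyaml

def find_keys(node, wanted, found=None):
    """Recursively collect string values of the wanted keys from nested dicts/lists."""
    if found is None:
        found = []
    if isinstance(node, dict):
        for k, v in node.items():
            if k in wanted and isinstance(v, str):
                found.append((k, v))
            find_keys(v, wanted, found)
    elif isinstance(node, list):
        for item in node:
            find_keys(item, wanted, found)
    return found

with open("options/train/train_Enhance.yml") as f:
    opt = yaml.safe_load(f)

for key, path in find_keys(opt, {"dataroot_GT", "dataroot_LQ"}):
    status = "OK" if os.path.exists(path) else "MISSING"
    print(f"{key}: {path} [{status}]")
```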

Acknowledgement