
Portrait Photo Retouching with PPR10K

Paper | Supplementary Material | Poster

PPR10K: A Large-Scale Portrait Photo Retouching Dataset with Human-Region Mask and Group-Level Consistency <br> Jie Liang*, Hui Zeng*, Miaomiao Cui, Xuansong Xie and Lei Zhang. <br> In CVPR 2021.

The proposed Portrait Photo Retouching dataset (PPR10K) is a large-scale and diverse dataset containing 11,161 high-quality raw portrait photos organized in groups, each photo paired with a full-resolution human-region mask and three retouched versions from experts a, b and c (see the Overview below).

Samples

[Figure: sample_images]

Two example groups of photos from the PPR10K dataset. Top: the raw photos; Bottom: the retouched results from expert-a and the human-region masks. The raw photos exhibit poor visual quality and large variance in subject views, background contexts, lighting conditions and camera settings. In contrast, the retouched results demonstrate both good visual quality (with human-region priority) and group-level consistency.

This dataset is the first of its kind to consider two special and practical requirements of the portrait photo retouching task, namely Human-Region Priority and Group-Level Consistency. Three main challenges remain to be tackled in follow-up research.

Agreement

Overview

All data is hosted on Google Drive, OneDrive and Baidu Netdisk (百度网盘, access code: mrwn):

| Path | Size | Files | Format | Description |
| :--- | ---: | ---: | :--- | :--- |
| PPR10K-dataset | 406 GB | 176,072 | | Main folder |
| ├  raw | 313 GB | 11,161 | RAW | All photos in raw format (.CR2, .NEF, .ARW, etc.) |
| ├  xmp_source | 130 MB | 11,161 | XMP | Default CameraRaw metadata files of the raw photos, used in our data augmentation |
| ├  xmp_target_a | 130 MB | 11,161 | XMP | CameraRaw metadata files recording the full adjustments by expert a |
| ├  xmp_target_b | 130 MB | 11,161 | XMP | CameraRaw metadata files recording the full adjustments by expert b |
| ├  xmp_target_c | 130 MB | 11,161 | XMP | CameraRaw metadata files recording the full adjustments by expert c |
| ├  masks_full | 697 MB | 11,161 | PNG | Full-resolution human-region masks in binary format |
| ├  masks_360p | 56 MB | 11,161 | PNG | 360p human-region masks for fast training and validation |
| ├  train_val_images_tif_360p | 91 GB | 97,894 | TIF | 360p source (16-bit TIFF, with 5 versions of augmented images) and target (8-bit TIFF) images for fast training and validation |
| ├  pretrained_models | 268 MB | 12 | PTH | Pretrained models for all 3 versions |
| └  hists | 624 KB | 39 | PNG | Overall statistics of the dataset |
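
After downloading, a quick sanity check is to compare the file counts in each subfolder against the table above. The sketch below only assumes that the subfolders keep the names listed in the table; point it at wherever the archives were unpacked.

```python
import os

# Expected file counts per subfolder, taken from the overview table above.
EXPECTED = {
    "raw": 11161,
    "xmp_source": 11161,
    "xmp_target_a": 11161,
    "xmp_target_b": 11161,
    "xmp_target_c": 11161,
    "masks_full": 11161,
    "masks_360p": 11161,
    "train_val_images_tif_360p": 97894,
    "pretrained_models": 12,
    "hists": 39,
}

def check_dataset(root):
    """Report any subfolder whose file count differs from the overview table."""
    for folder, expected in EXPECTED.items():
        path = os.path.join(root, folder)
        if not os.path.isdir(path):
            print(f"missing folder: {folder}")
            continue
        count = sum(len(files) for _, _, files in os.walk(path))
        status = "ok" if count == expected else f"expected {expected}"
        print(f"{folder}: {count} files ({status})")

if __name__ == "__main__":
    check_dataset("PPR10K-dataset")  # adjust to your download location
```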

One can directly use the 360p training and validation files we provide (540x360 or 360x540 resolution, sRGB color space; photos, 5 versions of augmented photos and the corresponding human-region masks), following the settings in our paper: train with the first 8,875 files and validate with the last 2,286 files. <br> Also, see the instructions to customize your data (e.g., augment the training samples regarding illumination and color, or export photos at higher or full resolution).
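
As a starting point, a minimal PyTorch-style loader for the 360p TIFF pairs and masks could look like the sketch below. The exact file-name pairing between sources, targets and masks (and how the 8,875/2,286 split maps onto file names) follows the dataloader in the released code; this sketch sidesteps that by taking pre-matched path triplets.

```python
import cv2
import numpy as np
from torch.utils.data import Dataset

class PPR10KPairs(Dataset):
    """Illustrative loader: each item is (16-bit source TIFF, 8-bit target TIFF, 360p mask PNG)."""

    def __init__(self, triplets):
        # triplets: list of (source_path, target_path, mask_path), already split
        # into the first 8,875 photos for training / last 2,286 for validation.
        self.triplets = list(triplets)

    def __len__(self):
        return len(self.triplets)

    @staticmethod
    def _read_image(path):
        # IMREAD_UNCHANGED preserves the 16-bit depth of the source TIFFs.
        img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
        scale = 65535.0 if img.dtype == np.uint16 else 255.0
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        return img.astype(np.float32) / scale  # HxWx3 in [0, 1]

    def __getitem__(self, idx):
        src_path, tgt_path, mask_path = self.triplets[idx]
        src = self._read_image(src_path).transpose(2, 0, 1)   # 3xHxW
        tgt = self._read_image(tgt_path).transpose(2, 0, 1)
        mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
        return src, tgt, mask[None, ...]                       # mask: 1xHxW
```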

Training and Validating on PPR10K using 3D LUT

Installation

git clone https://github.com/csjliang/PPR10K
cd PPR10K/code_3DLUT/
pip install -r requirements.txt
cd trilinear_cpp
sh setup.sh
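
To verify that the custom trilinear interpolation op compiled successfully, a minimal import check (assuming the extension is named `trilinear`, as in the upstream 3D LUT code) is:

```python
# Quick check that the trilinear interpolation extension built correctly.
# The module name `trilinear` follows the upstream 3D LUT code; treat it as an
# assumption and adjust if your build names it differently.
import torch       # the extension is compiled against torch
import trilinear   # raises ImportError if the build in trilinear_cpp failed
print("trilinear extension imported OK")
```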

Training

# baseline: train without the human-region masks and without the group-level consistency (GLC) loss
python train.py --data_path [path_to_dataset] --gpu_id [gpu_id] --use_mask False --output_dir [path_to_save_models]
# train with the human-region masks
python train.py --data_path [path_to_dataset] --gpu_id [gpu_id] --use_mask True --output_dir [path_to_save_models]
# train with the GLC loss but without the masks
python train_GLC.py --data_path [path_to_dataset] --gpu_id [gpu_id] --use_mask False --output_dir [path_to_save_models]
# train with both the masks and the GLC loss
python train_GLC.py --data_path [path_to_dataset] --gpu_id [gpu_id] --use_mask True --output_dir [path_to_save_models]
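
For intuition, the group-level consistency (GLC) idea penalizes color differences among the retouched results of photos from the same group. The exact loss used by train_GLC.py is the one defined in the paper and the released code; the snippet below is only a simplified, plausible form (variance of per-image mean color within a group), not the official objective.

```python
import torch

def group_consistency_sketch(outputs: torch.Tensor) -> torch.Tensor:
    """Simplified group-level consistency penalty (illustrative only).

    outputs: (G, 3, H, W) retouched results that belong to one group.
    Returns the variance of the per-image mean RGB across the group, which is
    zero when every result in the group shares the same overall color tone.
    """
    mean_rgb = outputs.mean(dim=(2, 3))              # (G, 3) per-image mean color
    return mean_rgb.var(dim=0, unbiased=False).sum()

# Usage sketch: total_loss = retouch_loss + lambda_glc * group_consistency_sketch(outputs)
# when each training batch is sampled group-wise.
```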

Evaluation

Generate the retouched validation results:

python validation.py --data_path [path_to_dataset] --gpu_id [gpu_id] --model_dir [path_to_models]

Then compute the metrics on the results against the targets, using the human-region masks:

calculate_metrics(source_dir, target_dir, mask_dir)
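
As a rough reference for what the human-region-aware metrics measure, PSNR can be restricted to the masked region as sketched below. This is an illustrative approximation only; the reported numbers should come from calculate_metrics, and the helper names here are ours.

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, peak: float = 255.0) -> float:
    """Standard PSNR between two images of identical shape."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def masked_psnr(pred: np.ndarray, target: np.ndarray, mask: np.ndarray, peak: float = 255.0) -> float:
    """PSNR restricted to pixels inside the human-region mask.

    pred/target: HxWx3 images; mask: HxW binary mask (nonzero = human region).
    """
    m = mask > 0
    diff = pred.astype(np.float64)[m] - target.astype(np.float64)[m]
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```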

Pretrained Models

# move the downloaded pretrained models into the default saved_models directory
mv your/path/to/pretrained_models/* saved_models/
# evaluate a pretrained model, e.g. the mask_noglc_a one
python validation.py --data_path [path_to_dataset] --gpu_id [gpu_id] --model_dir mask_noglc_a --epoch -1
# or initialize further training from it
python train.py --data_path [path_to_dataset] --gpu_id [gpu_id] --use_mask True --output_dir mask_noglc_a --epoch -1

License

This project is released under the Apache 2.0 license.

Citation

If you use this dataset or code for your research, please cite our paper.

@inproceedings{jie2021PPR10K,
  title={PPR10K: A Large-Scale Portrait Photo Retouching Dataset with Human-Region Mask and Group-Level Consistency},
  author={Liang, Jie and Zeng, Hui and Cui, Miaomiao and Xie, Xuansong and Zhang, Lei},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2021}
}

Related Projects

3D LUT

Contact

Should you have any questions, please contact me via liang27jie@gmail.com.