We built our source code on top of three repositories:

We used the synthetic mixed-illuminant evaluation set proposed by Afifi et al. [2].

@inproceedings{afifi2020deepWB,
  title={Deep White-Balance Editing},
  author={Afifi, Mahmoud and Brown, Michael S.},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}

@inproceedings{afifi2022awb,
  title={Auto White-Balance Correction for Mixed-Illuminant Scenes},
  author={Afifi, Mahmoud and Brubaker, Marcus A. and Brown, Michael S.},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year={2022}
}

@inproceedings{kinli2021ifrnet,
  title={Instagram Filter Removal on Fashionable Images},
  author={Kinli, Furkan and Ozcan, Baris and Kirac, Furkan},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  year={2021}
}

You can download the synthetic test set (JPG images) from the following links:

To download our trained models, run:

python3 models/download.py 

If you want to run the evaluation, do not forget to change the -ted (--testing-dir) argument in the bash scripts to the folder where you downloaded the dataset. To run the evaluation on the synthetic mixed-illuminant evaluation set (after downloading the dataset and the pre-trained weights):

Patch size 64, white-balance settings D/S/T:

./test_synthetic_64_dst.sh

Patch size 128, white-balance settings D/S/T:

./test_synthetic_128_dst.sh
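
Editing the -ted argument by hand in each script can be error-prone; a minimal sketch of scripting the change instead, assuming the -ted flag appears verbatim in both scripts (/path/to/synthetic_test_set is a placeholder for your actual download folder):

```shell
# Placeholder path -- substitute the folder where you unpacked the test set.
DATASET=/path/to/synthetic_test_set

# Rewrite the -ted (--testing-dir) argument in both evaluation scripts,
# skipping any script that is not present in the current directory.
for script in test_synthetic_64_dst.sh test_synthetic_128_dst.sh; do
  [ -f "$script" ] && sed -i "s|-ted [^ ]*|-ted ${DATASET}|" "$script"
done
```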

After the evaluation finishes, visual outputs are saved to the folder "results/ifrnet/images/synthetic".