Home

Awesome

FC<sup>4</sup>:<br> Fully Convolutional Color Constancy with Confidence-weighted Pooling (CVPR 2017)

[Paper]

Yuanming Hu<sup>1,2</sup>, Baoyuan Wang<sup>1</sup>, Stephen Lin<sup>1</sup>

<sup>1</sup>Microsoft Research <sup>2</sup>Tsinghua University (now MIT CSAIL)

Change log:

The Problem, the Challenge, and Our Solution

<img src="web/images/teaser.jpg" width="500">

Visual Results (More)

<img src="web/images/fig6.jpg">

FAQ

Color Constancy and Datasets

a) Links to datasets

(The following two sub-questions were frequently asked before the code was released. The script now takes care of these details, so you don't need to worry about them unless you're curious.)

b) The input images look purely black. What's happening?

The input photos from the ColorChecker dataset are 16-bit PNG files, which some image viewers do not support, as PNGs are typically 8-bit. Also, since these photos are linear (raw sensor activations) and modern displays assume a gamma of 2.2 (rather than a linear gamma), they appear even darker when displayed directly. An exposure correction is also necessary.
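As a rough sketch of what the preprocessing script handles for you: normalize the 16-bit values, apply an exposure boost, then gamma-encode for a 2.2 display. The exposure factor below is an arbitrary illustrative value, and a synthetic array stands in for a real image (in practice you would load one with `cv2.imread(path, cv2.IMREAD_UNCHANGED)`).

```python
import numpy as np

# Synthetic stand-in for a 16-bit linear image
# (in practice: img = cv2.imread("input.png", cv2.IMREAD_UNCHANGED))
linear = np.full((4, 4), 1024, dtype=np.uint16)

# Normalize to [0, 1], boost exposure, then apply the 2.2 display gamma
x = linear.astype(np.float64) / 65535.0
x = np.clip(x * 4.0, 0.0, 1.0)    # illustrative exposure correction
display = np.power(x, 1.0 / 2.2)  # gamma-encode for a 2.2 display

print(round(float(display[0, 0]), 3))
```

Without the gamma step, a linear value of ~0.06 would render as nearly black; after encoding it maps to roughly 0.28, which is clearly visible.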

c) I corrected the gamma. Now most images appear green. Is there anything wrong?

It is common for RAW images to appear green. One possible cause is that the color filters of digital cameras often have a stronger activation in the green channel.
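A quick way to neutralize such a cast (outside of the learned approach in this repo) is a simple gray-world correction, which assumes the average scene color should be neutral. This is a minimal illustrative sketch on a toy image, not part of the FC<sup>4</sup> pipeline:

```python
import numpy as np

# Toy linear image with a strong green cast (H, W, 3 in RGB order)
img = np.ones((2, 2, 3)) * np.array([0.2, 0.6, 0.3])

# Gray-world assumption: the average scene color should be neutral,
# so divide each channel by its mean, then rescale to keep brightness.
means = img.reshape(-1, 3).mean(axis=0)
balanced = img / means * means.mean()

print(balanced[0, 0])  # all three channels now equal
```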

d) What can be done to improve the datasets?

Finally, the Cube dataset can be useful for future research!

FC<sup>4</sup> Training and Testing

a) Installation

Please use python2 for now. All dependencies can be installed via pip:

sudo python2 -m pip install opencv-python tensorflow-gpu scipy

b) Data Pre-processing

Shi's Re-processing of Gehler's Raw Dataset:

c) Model Training

d) Visualize Confidence Maps

You can watch how the confidence maps evolve in the folders models/fc4/example/testXXXXsummaries_0.500000.

e) Pretrained models?

To get the pretrained models on the ColorChecker dataset, please download Pretrained models on the ColorChecker Dataset and put the nine files in the pretrained folder.

f) How to reproduce the results reported in the paper?

python2 fc4.py test pretrained/colorchecker_fold1and2.ckpt -1 g0 fold0
python2 fc4.py test pretrained/colorchecker_fold2and0.ckpt -1 g1 fold1
python2 fc4.py test pretrained/colorchecker_fold0and1.ckpt -1 g2 fold2
python2 combine.py outputs/fold0_err.pkl outputs/fold1_err.pkl outputs/fold2_err.pkl
25: 0.384, med: 1.160 tri: 1.237 avg: 1.634 75: 3.760 95: 4.850
| Method | Mean | Median | Tri. Mean | Best 25% | Worst 25% | 95% Quant. |
|---|---|---|---|---|---|---|
| SqueezeNet-FC4 (CVPR 2017 paper) | 1.65 | 1.18 | 1.27 | 0.38 | 3.78 | 4.73 |
| SqueezeNet-FC4 (open-source code) | 1.63 | 1.16 | 1.24 | 0.38 | 3.76 | 4.85 |

As you can see, we get slightly better results on all metrics except the 95% quantile. The difference is likely due to randomness (or a different TensorFlow version, etc.).
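For reference, the metrics above are the standard summary statistics over per-image angular errors. A self-contained sketch of how they can be computed from a list of errors (the function name and toy numbers are illustrative, not taken from combine.py):

```python
import numpy as np

def summarize(errors):
    """Standard color constancy statistics over angular errors (degrees)."""
    e = np.sort(np.asarray(errors, dtype=np.float64))
    q1, med, q3 = np.percentile(e, [25, 50, 75])
    k = max(1, len(e) // 4)
    return {
        "mean": e.mean(),
        "median": med,
        "trimean": (q1 + 2 * med + q3) / 4.0,  # Tukey's trimean
        "best25": e[:k].mean(),    # mean of the smallest 25% of errors
        "worst25": e[-k:].mean(),  # mean of the largest 25% of errors
        "q95": np.percentile(e, 95),
    }

stats = summarize([0.5, 1.0, 1.5, 2.0, 4.0, 6.0, 8.0, 10.0])
print(stats["median"], stats["trimean"])
```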

g) How to make inference on images based on a trained model?

python2 fc4.py test pretrained/colorchecker_fold1and2.ckpt -1 sample_inputs/a.png

The corrected image will be in the cc_outputs folder.
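Conceptually, the correction divides the image by the estimated illuminant (a von Kries-style diagonal transform). A minimal sketch with a hypothetical illuminant estimate, rescaled so the green channel gain is 1 (the exact normalization in fc4.py may differ):

```python
import numpy as np

# Hypothetical estimated illuminant (RGB), L2-normalized
illum = np.array([0.45, 0.75, 0.48])
illum = illum / np.linalg.norm(illum)

img = np.ones((2, 2, 3)) * 0.3  # toy linear image

# Divide out the illuminant; scale so the green channel is unchanged
corrected = img / illum * illum[1]

print(corrected[0, 0])
```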

You will see the results in seconds. Legend (TODO: this legend doesn't match the latest code!): <img src="web/images/legend.jpg" width="900">

h) What does the SEPARATE_CONFIDENCE option mean? When its value is False, does it mean confidence-weighted pooling is disabled?

First, let's clarify a common misunderstanding about the color constancy problem: that the output consists of three components. Actually, there are only two components (degrees of freedom). In some papers, the two components are denoted as u/v or temperature/tint. When estimating R/G/B, there should be a constraint on the values, either L1 (R+G+B=1) or L2 (R^2+G^2+B^2=1).

In our paper, we estimate R/G/B. Therefore, for each patch, we should either normalize the R/G/B output and estimate a separate confidence value (which is mathematically more explicit), or directly interpret the unnormalized estimate as the normalized R/G/B times the confidence, as described in Section 4.1 of the paper. Either way is fine, and confidence weighting is used in both, because one extra degree of freedom (the confidence) is allowed. If you set SEPARATE_CONFIDENCE=True, the former is used; otherwise the latter is used.

If you want to disable confidence-weighted pooling, the correct way is to set WEIGHTED_POOLING=False.
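The equivalence between the two formulations above can be checked numerically. In this toy sketch (values are illustrative, not from the network), the L2 norm of each patch's unnormalized output plays the role of its confidence, so summing unnormalized vectors equals a confidence-weighted sum of normalized estimates:

```python
import numpy as np

# Per-patch unnormalized RGB outputs (toy values): the vector's direction
# is the illuminant estimate, its L2 norm acts as the confidence.
patch_outputs = np.array([
    [0.90, 1.50, 0.90],   # confident patch
    [0.01, 0.02, 0.03],   # low-confidence patch, contributes little
])

# Direct pooling of unnormalized outputs
pooled = patch_outputs.sum(axis=0)
global_estimate = pooled / np.linalg.norm(pooled)

# Explicit form: normalize each patch, then weight by its confidence
conf = np.linalg.norm(patch_outputs, axis=1, keepdims=True)
weighted = (conf * (patch_outputs / conf)).sum(axis=0)

print(np.allclose(weighted, pooled))  # the two forms agree
```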

i) How to merge test results on three folds?

python2 combine.py [fold0_model_name] [fold1_model_name] [fold2_model_name]

Bibtex

@inproceedings{hu2017fc,
  title={FC$^4$: Fully Convolutional Color Constancy with Confidence-weighted Pooling},
  author={Hu, Yuanming and Wang, Baoyuan and Lin, Stephen},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={4085--4094},
  year={2017}
}

Related Research Projects and Implementations

Color Constancy Resources

Acknowledgments