Home

Awesome

SIHR: a MATLAB/GNU Octave toolbox for single image highlight removal


Citation

@article{Ramos2020,
  doi = {10.21105/joss.01822},
  url = {https://doi.org/10.21105/joss.01822},
  year = {2020},
  month = jan,
  publisher = {The Open Journal},
  volume = {5},
  number = {45},
  pages = {1822},
  author = {V{\'{\i}}tor Ramos},
  title = {{SIHR}: a {MATLAB}/{GNU} {Octave} toolbox for single image highlight removal},
  journal = {Journal of Open Source Software}
}

Summary

An ongoing effort to develop new and implement established single image highlight removal (SIHR) methods in MATLAB/GNU Octave.

Highlight, specularity, or specular reflection removal (see <sup>1</sup> for a Web of Science search expression, [1] for a reference work entry, and [2], [3] for surveys of the problem) concerns the following decomposition.

Dichromatic Reflection Model
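In the dichromatic reflection model, the observed image is the sum of a diffuse and a specular component; the methods in this toolbox recover the diffuse part and obtain the specular part by subtraction. A standard statement of the model (notation as in Tan and Ikeuchi [4]: \(\Lambda\) is the diffuse chromaticity, \(\Gamma\) the illumination chromaticity, and \(w_d\), \(w_s\) are per-pixel weights):

```latex
% Observed color at pixel x = diffuse term + specular term
I(\mathbf{x}) = I_d(\mathbf{x}) + I_s(\mathbf{x})
              = w_d(\mathbf{x})\,\Lambda(\mathbf{x}) + w_s(\mathbf{x})\,\Gamma
```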

I welcome and encourage contributions to this project; submissions are subject to review. Please check CONTRIBUTING.md for more details.

Disclaimer 1: this repository is intended for research purposes only.
Disclaimer 2: most of these methods are based on chromaticity analysis, so they fail for grayscale images.

<sup>1</sup> ((remov* NEAR/1 (highlight* OR specular*)) OR (separat* NEAR/1 (reflect* OR specular*)))

Raison d'être

I started this repository by implementing, translating, and collecting code snippets from the few available<sup>2,3,4,5</sup> code releases. Papers are often cryptic, released code is in C/C++ (which requires compilation and major source modification for general testing), or code is simply unavailable. See, e.g., this CSDN post<sup>6</sup>, which has no valid links at all.

In this context, this repository aims to be a continuous algorithmic aid for ongoing research and development of SIHR methods.

<sup>2</sup> Tan and Ikeuchi. [Online]. Available: http://tanrobby.github.io/code/highlight.zip
<sup>3</sup> Shen et al. [Online]. Available: http://ivlab.org/publications/PR2008_code.zip
<sup>4</sup> Yang et al. [Online]. Available: http://www6.cityu.edu.hk/stfprofile/qiyang.htm
<sup>5</sup> Shen and Zheng. [Online]. Available: http://ivlab.org/publications/AO2013_code.zip
<sup>6</sup> https://blog.csdn.net/nvidiacuda/article/details/8078167

Usage (API)

Calling this toolbox's functions is very straightforward:

I_d = AuthorYEAR(I); % I is a double-valued input image of size
                     % m×n×3 containing linear RGB values, and
                     % I_d is the diffuse reflection component
                     % computed by the AuthorYEAR method.
                     % The specular component is simply
                     % I_s = I - I_d;
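As a concrete sketch (the file name is a placeholder; Tan2005 is one of the methods listed below), a session might look like:

```matlab
% Load an image and convert it to double precision in [0, 1].
% 'toys.ppm' is a placeholder name; use any RGB image with highlights.
% Note: the methods expect linear RGB values.
I = im2double(imread('toys.ppm'));

I_d = Tan2005(I);  % diffuse component via Tan and Ikeuchi's method
I_s = I - I_d;     % specular component by subtraction

imshow([I, I_d, I_s]);  % side-by-side comparison
```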

Methods

The following methods are available.

Year    Method                     Function
2005    Tan and Ikeuchi [4]        Tan2005
2006    Yoon et al. [5]            Yoon2006
2008    Shen et al. [6]            Shen2008
2009    Shen and Cai [7]           Shen2009
2010    Yang et al. [8]            Yang2010
2013    Shen and Zheng [9]         Shen2013
2016    Akashi and Okatani [10]    Akashi2016

The following improvement is available.

Year    Method                        Function
2019    Yamamoto and Nakazawa [11]    Yamamoto2019

Environment

This repository is developed and tested in the following environments:

Tested environments

Octave 4.2             Ubuntu 18.04
Octave 5.1 (latest)    Windows 10 1903
MATLAB 9.1 (R2016b)    Windows 10 1903
MATLAB 9.6 (R2019a)    Windows 10 1903    Ubuntu 16.04 (MATLAB Online)

Installation

  1. git clone https://github.com/vitorsr/SIHR.git or download a copy of the repository.
  2. Start Octave or MATLAB.
    1. cd('path/to/SIHR'), i.e. change current folder to SIHR root (where SIHR.m is located).
    2. run SIHR.m for session path setup.
    3. help SIHR or doc SIHR provides a summary of the methods available.
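Put together, a first session after cloning might look like the sketch below ('path/to/SIHR' is a placeholder for wherever you cloned the repository):

```matlab
cd('path/to/SIHR')  % change to the repository root (where SIHR.m is)
SIHR                % run SIHR.m to set up the session path
help SIHR           % summary of the available methods
```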

Additional Debian/Ubuntu installation

To install the image package from Octave Forge, build-essential and liboctave-dev need to be present. Install them via apt, then proceed with package installation.

sudo apt-get install -qq -y build-essential liboctave-dev
octave --eval "pkg install -forge image"

Performance

This section aims to clarify how faithfully the methods implemented in this project reproduce the results reported in the literature.

Note: Akashi and Okatani's [10] method has highly fluctuating results because of random initialization.

Dataset

In the technical literature, two ground-truth datasets are currently in common use: one by Shen and Zheng [9], distributed alongside their code, and one by Grosse et al. [12], hosted on a dedicated page<sup>7</sup>.

Other test images are included alongside the code for Shen et al. [6] and Yang et al. [8].

Follow the instructions in the images folder to download a local copy of these images from the respective authors' pages.

<sup>7</sup> Grosse et al. [Online]. Available: http://www.cs.toronto.edu/~rgrosse/intrinsic/

Quality

Quantitative results are usually reported as the quality of the recovered diffuse component with respect to the ground truth available in the Shen and Zheng [9] test image set.

Automated testing

The reproduced results below were generated with the utils/automated_testing.m script.

Note: ssim is not available in Octave Forge image.
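As a sketch of the evaluation (file names are hypothetical; PSNR is computed by hand so the snippet also runs on Octave, where the toolbox psnr and ssim functions may be unavailable):

```matlab
% Hypothetical file names; use your local copies of the dataset.
I    = im2double(imread('cups.bmp'));     % input image with highlights
I_gt = im2double(imread('cups_gt.bmp'));  % ground-truth diffuse image

I_d = Shen2013(I);                        % recovered diffuse component

% PSNR in dB for images in [0, 1] (peak value 1)
mse = mean((I_d(:) - I_gt(:)).^2);
psnr_dB = 10 * log10(1 / mse);
```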

Highest (self and peer-reported | reproduced) PSNR results (in dB)

Year    Method                animals  cups  fruit  masks    Reproduced    animals  cups  fruit  masks
2005    Tan and Ikeuchi       30.2     30.1  29.6   25.6     Tan2005       30.4     31.6  30.4   25.8
2006    Yoon et al.           -        -     -      -        Yoon2006      32.9     33.3  36.6   34.1
2008    Shen et al.           34.6     37.7  37.6   31.7     Shen2008      34.2     37.5  38.0   32.1
2009    Shen and Cai          34.8     37.6  36.9   34.0     Shen2009      34.9     37.6  36.7   34.0
2010    Yang et al.           37.2     38.0  37.6   32.2     Yang2010      36.5     37.5  36.2   33.5
2013    Shen and Zheng        37.3     39.3  38.9   34.1     Shen2013      37.5     38.3  38.2   32.7
2015    Liu et al.            33.4     37.6  35.1   34.5     -             -        -     -      -
2016    Akashi and Okatani    26.8     35.7  30.8   32.3     Akashi2016    32.7     35.9  34.8   34.0
2016    Suo et al.            -        -     40.4   34.2     -             -        -     -      -
2017    Ren et al.            -        38.0  37.7   34.5     -             -        -     -      -
2018    Guo et al.            35.7     39.1  36.4   34.4     -             -        -     -      -

Highest (self and peer-reported | reproduced) SSIM results

Year    Method                animals  cups   fruit  masks    Reproduced    animals  cups    fruit   masks
2005    Tan and Ikeuchi       0.929    0.767  0.912  0.789    Tan2005       0.928    0.895   0.907   0.821
2006    Yoon et al.           -        -      -      -        Yoon2006      0.980    0.961   0.961   0.953
2008    Shen et al.           0.974    0.962  0.961  0.943    Shen2008      0.975    0.962   0.961   0.943
2009    Shen and Cai          -        -      -      -        Shen2009      0.985    0.970   0.962   0.961
2010    Yang et al.           0.970    0.941  0.939  0.899    Yang2010      0.952    0.937   0.916   0.896
2013    Shen and Zheng        0.971    0.966  0.960  0.941    Shen2013      0.985    0.964   0.958   0.935
2015    Liu et al.            -        -      -      -        -             -        -       -       -
2016    Akashi and Okatani    0.802    0.937  0.765  0.657    Akashi2016    0.734    0.919   0.901   0.871
2016    Suo et al.            -        -      -      -        -             -        -       -       -
2017    Ren et al.            0.896    0.957  0.952  0.913    -             -        -       -       -
2018    Guo et al.            0.975    0.963  0.930  0.955    -             -        -       -       -

Expected running time (in seconds)

Note: results for MATLAB R2019b, Intel i5-8250U CPU, and 24 GB DDR4 2400 MHz RAM.

Year    Reproduced    animals  cups   fruit  masks
2005    Tan2005       67.0     170.0  210.0  190.0
2006    Yoon2006      2.8      1.6    2.6    3.4
2008    Shen2008      1.9      7.8    4.3    4.9
2009    Shen2009      0.9      0.05   0.041  0.029
2010    Yang2010      0.31     0.13   0.11   0.081
2013    Shen2013      0.043    0.071  0.083  0.056
2016    Akashi2016    140.0    170.0  230.0  200.0

References

<small>
  1. R. T. Tan, “Specularity, Specular Reflectance,” in Computer Vision, Springer US, 2014, pp. 750–752 [Online]. Available: http://dx.doi.org/10.1007/978-0-387-31439-6_538
  2. A. Artusi, F. Banterle, and D. Chetverikov, “A Survey of Specularity Removal Methods,” Computer Graphics Forum, vol. 30, no. 8, pp. 2208–2230, Aug. 2011 [Online]. Available: http://dx.doi.org/10.1111/J.1467-8659.2011.01971.X
  3. H. A. Khan, J.-B. Thomas, and J. Y. Hardeberg, “Analytical Survey of Highlight Detection in Color and Spectral Images,” in Lecture Notes in Computer Science, Springer International Publishing, 2017, pp. 197–208 [Online]. Available: http://dx.doi.org/10.1007/978-3-319-56010-6_17
  4. R. T. Tan and K. Ikeuchi, “Separating reflection components of textured surfaces using a single image,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 2, pp. 178–193, Feb. 2005 [Online]. Available: http://dx.doi.org/10.1109/TPAMI.2005.36
  5. K. Yoon, Y. Choi, and I. S. Kweon, “Fast Separation of Reflection Components using a Specularity-Invariant Image Representation,” in 2006 International Conference on Image Processing, 2006 [Online]. Available: http://dx.doi.org/10.1109/ICIP.2006.312650
  6. H.-L. Shen, H.-G. Zhang, S.-J. Shao, and J. H. Xin, “Chromaticity-based separation of reflection components in a single image,” Pattern Recognition, vol. 41, no. 8, pp. 2461–2469, Aug. 2008 [Online]. Available: http://dx.doi.org/10.1016/J.PATCOG.2008.01.026
  7. H.-L. Shen and Q.-Y. Cai, “Simple and efficient method for specularity removal in an image,” Applied Optics, vol. 48, no. 14, p. 2711, May 2009 [Online]. Available: http://dx.doi.org/10.1364/AO.48.002711
  8. Q. Yang, S. Wang, and N. Ahuja, “Real-Time Specular Highlight Removal Using Bilateral Filtering,” in Computer Vision – ECCV 2010, Springer Berlin Heidelberg, 2010, pp. 87–100 [Online]. Available: http://dx.doi.org/10.1007/978-3-642-15561-1_7
  9. H.-L. Shen and Z.-H. Zheng, “Real-time highlight removal using intensity ratio,” Applied Optics, vol. 52, no. 19, p. 4483, Jun. 2013 [Online]. Available: http://dx.doi.org/10.1364/AO.52.004483
  10. Y. Akashi and T. Okatani, “Separation of reflection components by sparse non-negative matrix factorization,” Computer Vision and Image Understanding, vol. 146, pp. 77–85, May 2016 [Online]. Available: http://dx.doi.org/10.1016/j.cviu.2015.09.001
  11. T. Yamamoto and A. Nakazawa, “General Improvement Method of Specular Component Separation Using High-Emphasis Filter and Similarity Function,” ITE Transactions on Media Technology and Applications, vol. 7, no. 2, pp. 92–102, 2019 [Online]. Available: http://dx.doi.org/10.3169/mta.7.92
  12. R. Grosse, M. K. Johnson, E. H. Adelson, and W. T. Freeman, “Ground truth dataset and baseline evaluations for intrinsic image algorithms,” in 2009 IEEE 12th International Conference on Computer Vision, 2009 [Online]. Available: http://dx.doi.org/10.1109/ICCV.2009.5459428
</small>