README - KAZE Features

Version: 1.8.0 Date: 11-12-2014

You can get the latest version of the code from GitHub: https://github.com/pablofdezalc/kaze

CHANGELOG

Version: 1.8.0 Changes:

Version: 1.7.0 Changes:

Version: 1.6.0 Changes:

For more information about FED, please check:

  1. Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces. Pablo F. Alcantarilla, J. Nuevo and Adrien Bartoli. In British Machine Vision Conference (BMVC), Bristol, UK, September 2013.

  2. From box filtering to fast explicit diffusion. S. Grewenig, J. Weickert, and A. Bruhn. In Proceedings of the DAGM Symposium on Pattern Recognition, pages 533–542, 2010.
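
For intuition, FED replaces one costly implicit diffusion step by a cycle of n cheap explicit steps with varying step sizes; one cycle reaches a diffusion time that grows quadratically in n while only doing n linear-cost steps. Here is a small Python sketch of the step-size formula from reference 2 above (fed_tau is an illustrative helper name, not part of this library):

```python
import math

def fed_tau(n, tau_max=0.25):
    """Step sizes of one FED cycle with n explicit steps.

    tau_max is the stability limit of the underlying explicit scheme
    (0.25 for 2-D linear diffusion on a unit grid)."""
    return [tau_max / (2.0 * math.cos(math.pi * (2 * i + 1) / (4 * n + 2)) ** 2)
            for i in range(n)]

n = 10
cycle_time = sum(fed_tau(n))
# One cycle reaches diffusion time tau_max * (n^2 + n) / 3: quadratic
# in n, for only n explicit steps.
print(cycle_time, 0.25 * (n * n + n) / 3)  # both ~9.1667
```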

Version: 1.5.2 Changes:

Version: 1.5.1 Changes:

Version: 1.5 Changes:

Version: 1.4 Changes:

Version: 1.3 Changes:

Version: 1.2 Changes:

Version: 1.1 Changes:

What is this file?

This file explains how to make use of the source code for computing KAZE features and two practical image matching applications.

Library Dependencies

The code is mainly based on the OpenCV library using the C++ interface.

In order to compile the code, the following libraries need to be installed on your system:

If you want to use OpenMP parallelization, you will need to install OpenMP on your system. On Linux you can do this by installing the gomp library.

You will also need doxygen if you want to generate the documentation.

Tested compilers

Tested systems:

Getting Started

Compiling:

  1. $ mkdir build
  2. $ cd build
  3. $ cmake ..
  4. $ make

Additionally, you can install the library in /usr/local/kaze/lib by typing: $ sudo make install

If the compilation is successful you should see three executables in the bin folder: kaze_features, kaze_match and kaze_compare.

Additionally, the library libKAZE[.a, .lib] will be created in the lib folder.

If there is any error during compilation, some libraries may be missing. Please check the Library Dependencies section.

Examples: To see how the code works, examine the two examples provided.

Documentation

In the working folder type: doxygen

The documentation will be generated in the doc folder.

Computing KAZE Features

To run the program, type the following in the command line: ./kaze_features img.jpg [options]

The options are not mandatory. If you do not specify additional options, default values will be used. Here is a description of the additional options:

Important Things:

Image Matching Example with KAZE Features

The code contains one program to perform image matching between two images. If the ground truth transformation is not provided, the program estimates a fundamental matrix with RANSAC from the set of correspondences between the two images.
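
The RANSAC step can be illustrated with a small self-contained sketch. Note this is not the program's actual implementation: kaze_match fits a fundamental matrix, while for brevity the sketch below fits a planar homography (the same model stored in the ground-truth files) with a 4-point direct linear transform, on synthetic correspondences:

```python
import numpy as np

rng = np.random.default_rng(42)

def dlt_homography(src, dst):
    """Estimate a homography from >= 4 correspondences (direct linear transform)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)

def project(H, pts):
    """Apply homography H to 2-D points given as an (N, 2) array."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=500, thresh=1.0):
    """Keep the 4-point model with the largest consensus set."""
    best_H, best_mask = None, np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        with np.errstate(divide="ignore", invalid="ignore"):
            err = np.linalg.norm(project(H, src) - dst, axis=1)
        mask = np.nan_to_num(err, nan=np.inf) < thresh
        if mask.sum() > best_mask.sum():
            best_H, best_mask = H, mask
    return best_H, best_mask

# Synthetic data: 70 correspondences that follow a known homography,
# plus 30 random outliers appended at the end.
H_true = np.array([[1.1, 0.05, 10.0], [-0.03, 0.98, 5.0], [1e-4, 2e-4, 1.0]])
src = rng.uniform(0, 500, size=(100, 2))
dst = project(H_true, src)
dst[70:] = rng.uniform(0, 500, size=(30, 2))

H_est, inliers = ransac_homography(src, dst)
n_in = int(inliers.sum())
print("matches:", len(src), "inliers:", n_in,
      "ratio: %.3f" % (100.0 * n_in / len(src)))
```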

To run the program, type the following in the command line: ./kaze_match img1.jpg img2.pgm homography.txt [options]

The datasets folder contains the Iguazu dataset described in the paper and additional datasets from the Mikolajczyk et al. evaluation. The Iguazu dataset was generated by adding Gaussian noise of increasing standard deviation.

For example, with the default configuration parameters used in the current code version you should get the following results:

./kaze_match ../../datasets/iguazu/img1.pgm
              ../../datasets/iguazu/img4.pgm
              ../../datasets/iguazu/H1to4p
Number of Keypoints Image 1: 1825
Number of Keypoints Image 2: 1634
KAZE Features Extraction Time (ms): 992.943
Matching Descriptors Time (ms): 14.2714
Number of Matches: 981
Number of Inliers: 854
Number of Outliers: 127
Inliers Ratio: 87.054
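
The reported inliers ratio is simply the percentage of putative matches that agree with the estimated geometry. For the run above:

```python
matches, inliers = 981, 854
outliers = matches - inliers
ratio = 100.0 * inliers / matches
print(outliers, round(ratio, 3))  # prints: 127 87.054
```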

Image Matching Comparison between KAZE, SIFT and SURF (OpenCV)

The code contains one program to perform image matching between two images, showing a comparison between KAZE features, SIFT and SURF. All these implementations are based on the OpenCV library.

The program assumes that the ground truth transformation is provided.

To run the program, type the following in the command line: ./kaze_compare img1.jpg img2.pgm homography.txt [options]

For example, running kaze_compare with the first and third images from the boat dataset, you should get the following results:

./kaze_compare ../../datasets/boat/img1.pgm
               ../../datasets/boat/img3.pgm
               ../../datasets/boat/H1to3p
SIFT Results
**************************************
Number of Keypoints Image 1: 2000
Number of Keypoints Image 2: 2000
Number of Matches: 746
Number of Inliers: 690
Number of Outliers: 56
Inliers Ratio: 92.4933
SIFT Features Extraction Time (ms): 1087.91

SURF Results
**************************************
Number of Keypoints Image 1: 4021
Number of Keypoints Image 2: 3162
Number of Matches: 725
Number of Inliers: 499
Number of Outliers: 226
Inliers Ratio: 68.8276
SURF Features Extraction Time (ms): 133.709

KAZE Results
**************************************
Number of Keypoints Image 1: 4795
Number of Keypoints Image 2: 4061
Number of Matches: 1908
Number of Inliers: 1710
Number of Outliers: 198
Inliers Ratio: 89.6226
KAZE Features Extraction Time (ms): 869.032

One of the interesting reasons to use KAZE features is that the code is open source and released under the BSD license, so you can use it freely even in commercial applications, which is not the case for SIFT and SURF. In general, KAZE results are superior to the other OpenCV methods in terms of number of inliers and inliers ratio, while being slower to compute. Future work will try to speed up the process as much as possible while keeping good performance.

Citation

If you use this code as part of your work, please cite the following paper:

  1. KAZE Features. Pablo F. Alcantarilla, Adrien Bartoli and Andrew J. Davison. In European Conference on Computer Vision (ECCV), Florence, Italy, October 2012.

Contact Info

Important: If you work in a research institution, university, company, or you are a freelancer, and you are using KAZE or A-KAZE in your work, please send me an email! I would like to know who is using KAZE around the world!

If you have any questions, find any bug in the code, or want to share some improvements, please contact me:

Pablo F. Alcantarilla email: pablofdezalc@gmail.com