Key.Net: Keypoint Detection by Handcrafted and Learned CNN Filters

Code for the ICCV19 paper:

"Key.Net: Keypoint Detection by Handcrafted and Learned CNN Filters".
Axel Barroso-Laguna, Edgar Riba, Daniel Ponsa, Krystian Mikolajczyk. ICCV 2019.

[Paper on arXiv]

Update on December 8, 2021

We have created a repository with a Key.Net version implemented in PyTorch. Refer to our new repo for more details.

Update on March 20, 2020

We have updated the descriptor part. Previously we used a TensorFlow implementation of the HardNet descriptor; we have now switched to the official HardNet model in PyTorch. This change improves the results of the matching step and, consequently, of everything that follows.

Prerequisites

Python 3.7 is required to run the Key.Net code. Use Conda to install the dependencies:

conda create --name keyNet_environment tensorflow-gpu=1.13.1
conda activate keyNet_environment 
conda install -c conda-forge opencv tqdm
conda install -c conda-forge scikit-image
conda install pytorch==1.2.0 -c pytorch
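
As a quick sanity check that the environment resolved to the intended versions (a minimal sketch; the expected version strings are the ones requested by the commands above):

import tensorflow as tf
import torch

print('TensorFlow:', tf.__version__)                 # expected 1.13.1
print('PyTorch:', torch.__version__)                 # expected 1.2.0
print('GPU available:', tf.test.is_gpu_available())  # should be True on a GPU machine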

Feature Extraction

extract_multiscale_features.py can be used to extract Key.Net features for a given list of images. The list must contain the full path to each image; if an image does not exist, an error is raised.

The script generates two NumPy files per image: a '.kpt' file for keypoints and a '.dsc' file for descriptors. The descriptor used together with Key.Net is HardNet. The output format of the keypoints is as follows:

Arguments:

Run the following script to generate the keypoint and descriptor NumPy files from the image located in the test_im directory.

python extract_multiscale_features.py --list-images test_im/image.txt --results-dir test_im/
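
The generated files can be loaded back with NumPy for inspection; a minimal sketch, assuming the outputs are plain arrays written with np.save (the filenames below are hypothetical, and np.save may append an .npy suffix; check your --results-dir for the exact names):

import numpy as np

# hypothetical output names; adjust to the files the script actually writes
kpts = np.load('test_im/image.png.kpt.npy')  # keypoint array
dscs = np.load('test_im/image.png.dsc.npy')  # one HardNet descriptor per keypoint
print(kpts.shape, dscs.shape)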

HSequences Benchmark

We also provide a benchmark to compute HSequences repeatability (single- and multi-scale) and MMA metrics. To do so, first download the full images (HSequences) from the HPatches repository and place them in the root directory of the project. We provide the file HSequences_bench/HPatches_images.txt containing the list of images inside HSequences.

Run the following script to compute features for HSequences:

python extract_multiscale_features.py --list-images HSequences_bench/HPatches_images.txt --results-dir extracted_features

Once all features have been extracted, compute the repeatability and MMA metrics by running:

python hsequences_bench.py --results-dir extracted_features --results-bench-dir HSequences_bench/results --split full
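
For intuition, single-scale repeatability measures the fraction of keypoints detected in one image that reappear, within a pixel threshold, at their homography-projected locations in the other image. A minimal sketch of that computation (an illustration, not the benchmark's exact implementation; keypoint arrays are assumed to carry x, y in their first two columns):

import numpy as np

def warp_points(kpts, H):
    # Project (x, y) keypoint positions with a 3x3 homography.
    pts = np.concatenate([kpts[:, :2], np.ones((len(kpts), 1))], axis=1)
    proj = pts @ H.T
    return proj[:, :2] / proj[:, 2:3]

def repeatability(kpts_a, kpts_b, H_ab, pixel_threshold=3.0):
    # Fraction of keypoints from image A that land within
    # pixel_threshold pixels of some keypoint detected in image B
    # after warping with the ground-truth homography H_ab.
    warped = warp_points(kpts_a, H_ab)
    dists = np.linalg.norm(warped[:, None, :] - kpts_b[None, :, :2], axis=2)
    return float((dists.min(axis=1) <= pixel_threshold).mean())

The full benchmark also handles details this sketch omits, such as restricting keypoints to the region visible in both images.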

Use arguments to set different options:

Training Key.Net

Before training Key.Net, a synthetic dataset must be generated. In our paper we downloaded ImageNet and used it to generate synthetic pairs of images, but any other dataset would work if it is big enough. The first time you run the train_network.py script, two TFRecord files will be generated, one for training and one for validation. This step only runs when the files cannot be found, so subsequent runs of the script will skip it.

python train_network.py --data-dir /path/to/ImageNet --network-version KeyNet_default
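
For intuition, each synthetic pair can be produced by warping an image with a random homography, which then serves as the ground-truth transformation between the two views. A minimal sketch of this idea (an illustration only, not the repo's exact generation code; the image path is hypothetical):

import cv2
import numpy as np

def random_homography(h, w, max_shift=0.15):
    # Perturb the four image corners by up to max_shift of the image size
    # and fit the homography mapping the original to the perturbed corners.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    noise = np.random.uniform(-max_shift, max_shift, (4, 2)) * [w, h]
    return cv2.getPerspectiveTransform(src, src + noise.astype(np.float32))

image = cv2.imread('example.jpg', cv2.IMREAD_GRAYSCALE)  # hypothetical image path
h, w = image.shape[:2]
H = random_homography(h, w)
warped = cv2.warpPerspective(image, H, (w, h))
# (image, warped, H) forms one synthetic training pair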

Check the arguments to customize your training; some parameters you might want to change are:

BibTeX

If you use this code in your research, please cite our paper:

@InProceedings{Barroso-Laguna2019ICCV,
    author = {Barroso-Laguna, Axel and Riba, Edgar and Ponsa, Daniel and Mikolajczyk, Krystian},
    title = {{Key.Net: Keypoint Detection by Handcrafted and Learned CNN Filters}},
    booktitle = {Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision},
    year = {2019},
}