A Toolbox for Image Feature Matching and Evaluations
In this repository, we provide easy interfaces for several existing SotA methods to match image feature correspondences between image pairs. We provide scripts to evaluate their predicted correspondences on common benchmarks for the tasks of image matching, homography estimation, and visual localization.
TODOs & Updates
- Add LoFTR method (2021-7-8)
- Add simple match visualization (2021-7-8)
- Use immatch as a python lib under develop mode. Check install.md for details. (2021-7-22)
- Add SIFT method (opencv version) (2021-7-25)
- Add script to eval on RobotCar using HLoc (2021-7-31)
- Add DoG-AffNet-HardNet (Contributed by Dmytro Mishkin 👏, 2021-8-29)
- Add AUC metric and opencv solver for Homography estimation on HPatches (#20, 2022-1-12)
- Add COTR (A naive wrapper without tuning parameters, 2022-3-29)
- Add ASpanFormer (2023-6-2)
- Add MegaDepth relative pose estimation following LoFTR & ASpanFormer (2023-6-2)
- Add ScanNet relative pose estimation following LoFTR & ASpanFormer (2024-1-11)
- Add support to eval on Image Matching Challenge
- Add scripts to eval on SimLoc challenge.
Comments from QJ: Currently I am quite busy with my study & work, so it will take some time before I release the next two TODOs.
Supported Methods & Evaluations
Sparse Keypoint-based Matching:
- Local Feature: CAPS, D2Net, R2D2, SuperPoint, DoG-AffNet-HardNet
- Matcher: SuperGlue
Semi-dense Matching:
- Correspondence Network: NCNet, SparseNCNet
- Transformer-based: ASpanFormer, LoFTR, COTR
- Local Refinement: Patch2Pix
Supported Evaluations:
- Image feature matching on HPatches
- Homography estimation on HPatches
- Visual localization benchmarks:
  - InLoc
  - Aachen Day-Night (original + v1.1)
  - RobotCar Seasons (v1 + v2)
Repository Overview
The repository is structured as follows:
- configs/: Each method has its own yaml (.yml) file to configure its testing parameters (see the sketch after this list).
- data/: All datasets should be placed under this folder following our instructions described in Data Preparation.
- immatch/: Implementations of the method wrappers and evaluation interfaces.
- outputs/: All evaluation results are supposed to be saved here, one folder per benchmark.
- pretrained/: Contains the pretrained models of the supported methods.
- third_party/: The original implementations of the supported methods, included as git submodules.
- notebooks/: Jupyter notebooks with example code to quickly try out the methods implemented in this repo.
- docs/: Separate documentation about installation and evaluation, to keep this README clean :).
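As a quick way to see which settings a method offers, you can inspect the tags in its config file. Below is a minimal sketch, assuming PyYAML is installed and using patch2pix.yml purely as an example; other configs follow the same layout:

```python
import yaml

# Minimal sketch: list the setting tags defined in a method's config file.
# Each top-level key tags one setting, e.g., the 'example' setting used
# in the quick-testing snippet further below.
with open('configs/patch2pix.yml', 'r') as f:
    config = yaml.load(f, Loader=yaml.FullLoader)
print(list(config.keys()))
```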
👉 Refer to install.md for details about installation.
👉 Refer to evaluation.md for details about evaluation on benchmarks.
Example Code for Quick Testing
To use a specific method to perform the matching task, you simply need to:
- Initialize a matcher using its config file. See example config yaml files under the configs folder, e.g., patch2pix.yml. Each config file contains multiple sections; each section corresponds to one setting. Here, we use the setting tagged 'example' for testing on example image pairs.
- Perform matching
```python
import immatch
import yaml
from immatch.utils import plot_matches

# Initialize model
with open('configs/patch2pix.yml', 'r') as f:
    args = yaml.load(f, Loader=yaml.FullLoader)['example']
model = immatch.__dict__[args['class']](args)
matcher = lambda im1, im2: model.match_pairs(im1, im2)

# Specify the image pair
im1 = 'third_party/patch2pix/examples/images/pair_2/1.jpg'
im2 = 'third_party/patch2pix/examples/images/pair_2/2.jpg'

# Match and visualize
matches, _, _, _ = matcher(im1, im2)
plot_matches(im1, im2, matches, radius=2, lines=True)
```
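The predicted matches can also be fed into a downstream robust estimator. Here is a minimal sketch (not part of the toolbox) that fits a homography with OpenCV, assuming `matches` is an N×4 array of (x1, y1, x2, y2) pixel correspondences as consumed by `plot_matches` above:

```python
import cv2
import numpy as np

# Minimal sketch (assumption: `matches` is an N x 4 array of
# (x1, y1, x2, y2) pixel correspondences from the snippet above).
pts1 = np.asarray(matches)[:, :2].astype(np.float32)
pts2 = np.asarray(matches)[:, 2:].astype(np.float32)

# Robustly estimate a 3x3 homography with RANSAC (3 px reprojection threshold).
H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, ransacReprojThreshold=3.0)
print(f'{int(inliers.sum())} / {len(pts1)} matches survived RANSAC')
```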
👉 Try out the code using the example notebook.
Notice
- This repository is expected to be actively maintained (at least until I graduate🤣🤣) and to gradually (slowly) grow with new features of interest.
- Suggestions on how to improve this repo, such as adding new SotA image matching methods or new benchmark evaluations, are welcome 👏.
Regarding Patch2Pix
With this repository, one can reproduce the tables reported in our paper accepted at CVPR 2021: Patch2Pix: Epipolar-Guided Pixel-Level Correspondences [pdf]. Check our patch2pix repository for its training code.
Disclaimer
- None of the supported methods and evaluations is implemented from scratch by us. Instead, we modularize their original code to define unified interfaces.
- If you use the results of a method, remember to cite the corresponding paper.
- All credit for the implementation of those methods belongs to their authors.