
<div align="center">
  <img src="assets/mesa-ava.png" style="height:100px"></img>
  <h1>Area to Point Matching Framework</h1>
  <div style="display: flex; justify-content: center; gap: 10px;">
    <a href='https://arxiv.org/abs/2408.00279'><img src='https://img.shields.io/badge/arXiv-2409.02048-b31b1b.svg'></a>
    <a href='https://cvl.sjtu.edu.cn/getpaper/1103'><img src='https://img.shields.io/badge/Project-Page-Green'></a>
    <a href='https://www.bilibili.com/video/BV19BsFe5E6U/?spm_id_from=333.1365.list.card_archive.click&vd_source=a8ebbc42d41f0658cfa31f10414ec697'><img src='https://img.shields.io/badge/Bilibili-Video-ff69b4'></a>
  </div>
</div>

The Area to Point Matching family excels at handling matching challenges such as images with large or varying resolutions and large scale or viewpoint changes.

A2PM

This is a user-friendly implementation of the Area to Point Matching (A2PM) framework, powered by Hydra.

It contains the implementation of SGAM (arXiv'23), a training-free version of MESA (CVPR'24), and DMESA (arXiv'24).

Thanks to Hydra, the implementation is highly configurable and easy to extend.

It supports feature matching approaches built on the A2PM framework and makes it easy to combine new area matching and point matching methods.

Qualitative Results of MESA and DMESA



Table of Contents

- News and TODOs
- Installation
- Usage: hydra-based Configuration
- Evaluation
- Citation
- Acknowledgement


News and TODOs


Installation

To begin, install the dependencies by following the instructions below.

Clone the Repository

```bash
git clone --recursive https://github.com/Easonyesheng/A2PM-MESA
# or, if you have already cloned the repository:
# git submodule update --init --recursive
cd A2PM-MESA
```

Environment Creation

```bash
conda create -n A2PM python==3.8
conda activate A2PM
```

Basic Dependencies


Usage: hydra-based Configuration

This code is based on Hydra, a powerful configuration framework for Python applications. The documentation of Hydra can be found at https://hydra.cc.

In the following, we introduce how to use the code by describing its components and their Hydra configurations.
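As a quick orientation, Hydra composes a run configuration from config groups and lets you override any field from the command line. The entry script and config-group names below are illustrative assumptions, not necessarily those used in this repository; only the `+experiment=xxx` style of override also appears in the provided benchmark scripts.

```bash
# Illustrative Hydra usage (the script name and config-group names are placeholders):
# select config groups and override individual fields from the command line
python test_a2pm.py dataset=scannet1500 area_matcher=dmesa point_matcher=dkm
# '+' adds an entry that is not in the defaults list, e.g. a predefined experiment
python test_a2pm.py +experiment=dmesa-dkm-scannet
```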

Dataset

We offer dataloaders for two widely used datasets: ScanNet1500 and MegaDepth1500.
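For example, the dataset to evaluate on and the path to a local copy can both be set through Hydra overrides; the config group and field names below are assumptions for illustration and may not match the actual configs in this repository.

```bash
# Illustrative only: the dataset config group and its fields may be named differently here.
python test_a2pm.py dataset=scannet1500 dataset.root_dir=/path/to/ScanNet1500
python test_a2pm.py dataset=megadepth1500 dataset.root_dir=/path/to/MegaDepth1500
```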

Segmentation Preprocessing

The segmentation results are needed for the area matching methods.

Usage

SAM2
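As a rough sketch, segmentation preprocessing is typically run once per dataset, saving the masks that the area matcher later loads. The script name and arguments below are purely hypothetical placeholders; refer to the segmentation code in this repository for the actual entry point.

```bash
# Hypothetical invocation (placeholder names); see the repo's segmentation scripts for the real ones.
python sam2_seg.py dataset=scannet1500 save_dir=/path/to/seg_results
```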

Area Matching

Area matching establishes semantic area matches between two images to reduce matching redundancy; it is the core of the A2PM framework.

Point Matching

Point matching establishes point matches between two images (or between two matched areas).

Match Fusion (Geometry Area Matching)

We fuse the point matches produced inside each matched area using the geometry area matching module.

A2PM

The A2PM framework combines the above components into a complete feature matching pipeline.
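Putting the components together, a full pipeline run typically amounts to picking one option per component through Hydra. The entry script and config keys below are illustrative assumptions; the experiment configs and shell scripts shipped with the repository (see Evaluation) are the grounded way to launch the pipeline.

```bash
# Illustrative composition of an A2PM pipeline; names are placeholders, not the repo's actual keys.
# area_matcher selects the area matching method (e.g. a MESA-free, DMESA, or SGAM variant),
# point_matcher selects the inside-area point matcher (e.g. DKM),
# and geo_area_matcher configures the match fusion step.
python test_a2pm.py \
    dataset=scannet1500 \
    area_matcher=dmesa \
    point_matcher=dkm \
    geo_area_matcher=default
```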

DEMO

Evaluation


Benchmark Test

You can run a benchmark test with one of the provided shell scripts, e.g.:

```bash
./scripts/dmesa-dkm-md.sh # DMESA+DKM on MegaDepth1500
```

You can change the configuration in the shell script to test different methods, i.e. `+experiment=xxx`.
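For reference, such a script is usually a thin wrapper around the Hydra entry point, so switching methods means switching the experiment override. The exact contents below are an assumption; check the scripts in ./scripts for the actual commands.

```bash
#!/bin/bash
# Hypothetical sketch of a benchmark script such as ./scripts/dmesa-dkm-md.sh;
# the real entry point and experiment names may differ.
python test_a2pm.py +experiment=dmesa-dkm-megadepth
```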

Expected Results of provided scripts

Taking DKM as an example, the expected results are as follows:

| SN1500 ($640\times480$) | DKM | MESA-free+DKM | DMESA+DKM |
| --- | --- | --- | --- |
| Pose AUC@5 | 30.26 | 31.64 | 30.96 |
| Pose AUC@10 | 51.51 | 52.80 | 52.41 |
| Pose AUC@20 | 69.43 | 70.08 | 69.74 |

| MD1500 ($832\times832$) | DKM | MESA-free+DKM | DMESA+DKM |
| --- | --- | --- | --- |
| Pose AUC@5 | 63.61 | 63.85 | 65.65 |
| Pose AUC@10 | 76.75 | 77.38 | 78.46 |
| Pose AUC@20 | 85.72 | 86.47 | 86.97 |

Citation

If you find this work useful, please consider citing:

@article{SGAM,
  title={Searching from Area to Point: A Hierarchical Framework for Semantic-Geometric Combined Feature Matching},
  author={Zhang, Yesheng and Zhao, Xu and Qian, Dahong},
  journal={arXiv preprint arXiv:2305.00194},
  year={2023}
}
@InProceedings{MESA,
    author    = {Zhang, Yesheng and Zhao, Xu},
    title     = {MESA: Matching Everything by Segmenting Anything},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {20217-20226}
}
@misc{DMESA,
    title={DMESA: Densely Matching Everything by Segmenting Anything},
    author={Yesheng Zhang and Xu Zhao},
    year={2024},
    eprint={2408.00279},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Acknowledgement

We thank the authors of the following repositories for their great work: