Self-Supervised Equivariant Learning for Oriented Keypoint Detection (CVPR 2022)

This is the official implementation of the CVPR 2022 paper "Self-Supervised Equivariant Learning for Oriented Keypoint Detection" by Jongmin Lee, Byungjin Kim, and Minsu Cho.

<p float="left"> <img src="./assets/architecture.png" width="59%" /> <img src="./assets/loss.png" width="38.3%" /> </p>

Detecting robust keypoints from an image is an integral part of many computer vision problems, and the characteristic orientation and scale of keypoints play an important role in keypoint description and matching. Existing learning-based methods for keypoint detection rely on standard translation-equivariant CNNs but often fail to detect reliable keypoints under geometric variations. To learn to detect robust oriented keypoints, we introduce a self-supervised learning framework using rotation-equivariant CNNs. We propose a dense orientation alignment loss, computed over image pairs generated by synthetic transformations, for training a histogram-based orientation map. Our method outperforms previous methods on an image matching benchmark and a camera pose estimation benchmark.
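The orientation alignment idea can be illustrated with a toy example: under a known synthetic rotation, the orientation histogram of the rotated image should equal a circular shift of the original image's histogram, so the training signal is the distance between the two after shifting. This is only a minimal sketch in pure Python with illustrative function names; the paper's actual loss is dense (per pixel), differentiable, and operates on soft histogram bins.

```python
def shift_histogram(hist, angle_deg):
    """Circularly shift an orientation histogram by a known rotation angle.
    Bin width is 360 / len(hist); assumes the angle is a multiple of it."""
    n = len(hist)
    shift = round(angle_deg / (360.0 / n)) % n
    return hist[-shift:] + hist[:-shift]

def alignment_loss(hist_a, hist_b, angle_deg):
    """Squared-L2 distance between hist_b and hist_a shifted by the known
    synthetic rotation -- zero when the orientation maps are equivariant."""
    shifted = shift_histogram(hist_a, angle_deg)
    return sum((x - y) ** 2 for x, y in zip(shifted, hist_b))
```

For example, with 4 bins (90 degrees each), a histogram `[1, 0, 0, 0]` rotated by 90 degrees should become `[0, 1, 0, 0]`, and the loss between the pair at the true angle is zero.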

Rotation-equivariant Keypoint Detection

PyTorch source code for the CVPR 2022 paper:

"Self-Supervised Equivariant Learning for Oriented Keypoint Detection".
Jongmin Lee, Byungjin Kim, Minsu Cho. CVPR 2022.

[Paper] [Project page]

Installation

Clone the Git repository

```bash
git clone https://github.com/bluedream1121/ReKD.git
```

Install dependencies

Run the script to install all dependencies. You need to provide your conda install path (e.g., ~/anaconda3) and a name for the conda environment to be created.

```bash
bash install.sh [conda_install_path] rekd
```

Requirements

Dataset preparation

Training data

Evaluation data

Synthetic data generation

```bash
python train.py --data_dir [ImageNet_directory] --synth_dir datasets/synth_data --patch_size 192 --max_angle 180
```
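Here `--patch_size` sets the size of the generated patches and `--max_angle` bounds the sampled in-plane rotations. As a rough illustration of what a synthetic training pair looks like, the toy sketch below samples an angle and produces a nearest-neighbour-rotated copy; this is illustrative only, not the repository's actual generation code.

```python
import math
import random

def rotate_nn(img, angle_deg):
    """Nearest-neighbour rotation of a square 2-D list about its centre.
    Destination pixels that map outside the source are filled with 0."""
    n = len(img)
    c = (n - 1) / 2.0
    t = math.radians(angle_deg)
    cos_t, sin_t = math.cos(t), math.sin(t)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            # inverse-map each destination pixel into the source image
            xs = cos_t * (x - c) + sin_t * (y - c) + c
            ys = -sin_t * (x - c) + cos_t * (y - c) + c
            xi, yi = round(xs), round(ys)
            if 0 <= xi < n and 0 <= yi < n:
                out[y][x] = img[yi][xi]
    return out

def make_synthetic_pair(img, max_angle=180):
    """Sample a rotation in [-max_angle, max_angle] and return
    (original patch, rotated patch, ground-truth angle)."""
    angle = random.uniform(-max_angle, max_angle)
    return img, rotate_nn(img, angle), angle
```

The ground-truth angle returned with each pair is what makes the orientation alignment supervision self-supervised: no manual annotation is needed.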

Training

```bash
python train.py --synth_dir datasets/synth_data --group_size 36 --batch_size 16 --ori_loss_balance 100
```

Test on HPatches

You can download the pretrained weights: [best models] (password: rekd).

```bash
python eval_with_extract.py --load_dir [Trained_models] --eval_split full
```
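The tables below report repeatability and MMA@t, the mean matching accuracy at a t-pixel reprojection threshold. As a reading aid, MMA at a single threshold boils down to the fraction of predicted matches whose reprojection error is below t pixels (hypothetical helper name; the benchmark averages this over image pairs):

```python
def mean_matching_accuracy(errors_px, threshold):
    """Fraction of predicted matches whose reprojection error is at most
    `threshold` pixels; returns 0.0 when there are no matches."""
    if not errors_px:
        return 0.0
    return sum(e <= threshold for e in errors_px) / len(errors_px)
```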

<details> <summary><b>HPatches all variations</b></summary>

Results on all HPatches variations. `*` denotes results with orientation-based outlier filtering. We use the HardNet descriptor for evaluation.

| Model | Repeatability | MMA@3 | MMA@5 | pred. match. | Links | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| CVPR2022 | 57.6 | 73.1 | 79.6 | 505.8 | - | CVPR2022 results |
| CVPR2022* | 57.6 | 76.7 | 82.3 | 440.1 | - | CVPR2022 results |
| REKD_release | 58.4 | 73.5 | 80.1 | 511.6 | model | Official retrained model |
| REKD_release* | 58.4 | 77.1 | 82.9 | 444.4 | model | Official retrained model |

```bash
python eval_with_extract.py --load_dir trained_models/release_group36_f2_s2_t2.log/best_model.pt --eval_split full
```
</details> <details> <summary><b>HPatches viewpoint variations</b></summary>

Results on HPatches viewpoint variations. `*` denotes results with orientation-based outlier filtering. We use the HardNet descriptor for evaluation.

| Model | Repeatability | MMA@3 | MMA@5 | pred. match. | Notes |
| --- | --- | --- | --- | --- | --- |
| REKD_release | 59.1 | 72.5 | 78.7 | 464.9 | Official retrained model |
| REKD_release* | 59.1 | 75.7 | 81.1 | 399.8 | Official retrained model |
</details> <details> <summary><b>HPatches illumination variations</b></summary>

Results on HPatches illumination variations. `*` denotes results with orientation-based outlier filtering. We use the HardNet descriptor for evaluation.

| Model | Repeatability | MMA@3 | MMA@5 | pred. match. | Notes |
| --- | --- | --- | --- | --- | --- |
| REKD_release | 57.6 | 74.4 | 81.5 | 559.9 | Official retrained model |
| REKD_release* | 57.6 | 78.5 | 84.7 | 490.6 | Official retrained model |
</details>
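The `*` rows above filter outlier matches using the predicted orientations: a match is discarded when the orientation difference of its two keypoints disagrees with the dominant relative rotation between the images. A plausible sketch of this idea (illustrative only; the repository's implementation may differ):

```python
def filter_matches_by_orientation(matches, num_bins=36, tol_deg=20.0):
    """Keep only matches whose keypoint orientation difference agrees with
    the dominant (modal) relative rotation between the two images.
    `matches` is a list of (theta1_deg, theta2_deg) pairs."""
    diffs = [(t2 - t1) % 360.0 for t1, t2 in matches]
    # vote the per-match differences into a coarse circular histogram
    bin_w = 360.0 / num_bins
    hist = [0] * num_bins
    for d in diffs:
        hist[int(d // bin_w) % num_bins] += 1
    # take the centre of the most-voted bin as the global rotation estimate
    mode = (hist.index(max(hist)) + 0.5) * bin_w

    def circ_dist(a, b):
        # shortest angular distance on the circle, in degrees
        return min((a - b) % 360.0, (b - a) % 360.0)

    return [m for m, d in zip(matches, diffs) if circ_dist(d, mode) <= tol_deg]
```

For example, four matches with a consistent ~30-degree orientation difference survive, while a stray match with a 90-degree difference is rejected.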

Citation

If you find our code or paper useful in your research, please consider citing our work with the following BibTeX:

```bibtex
@inproceedings{lee2022self,
  title={Self-Supervised Equivariant Learning for Oriented Keypoint Detection},
  author={Lee, Jongmin and Kim, Byungjin and Cho, Minsu},
  booktitle={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={4837--4847},
  year={2022},
  organization={IEEE}
}
```

Reference

Contact

Questions can be left as issues in the repository.