Neural Outlier Rejection for Self-Supervised Keypoint Learning

Overview

[Full paper]

Setting up your environment

You need a machine with recent NVIDIA drivers and a GPU. We recommend using Docker (see the nvidia-docker2 instructions) for a reproducible environment. To set up your environment, run the following in a terminal (only tested on Ubuntu 18.04 and with PyTorch 1.6):

git clone https://github.com/TRI-ML/KP2D.git
cd KP2D
# if you want to use docker (recommended)
make docker-build
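
You can first confirm that the NVIDIA driver and a GPU are visible on the host with a quick check (standard NVIDIA tooling, not specific to this repository):

nvidia-smi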

All commands below are listed as if run directly inside our container. To run any of them in a container, you can either start the container in interactive mode with make docker-start and type the commands in the resulting shell, or run them in one step:

# single GPU
make docker-run COMMAND="some-command"
# multi-GPU
make docker-run-mpi COMMAND="some-command"
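
For example, as an illustrative sanity check (treat this as a sketch, since the exact quoting depends on how the Makefile forwards COMMAND), you can print the PyTorch version available inside the container:

make docker-run COMMAND="python -c 'import torch; print(torch.__version__)'"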

If you want to use the Weights & Biases (WANDB) features for experiment management and visualization, create the associated accounts and configure your shell with the following environment variables:

export WANDB_ENTITY="something"
export WANDB_API_KEY="something"

To enable WANDB logging and AWS checkpoint syncing, you can then set the corresponding configuration parameters in configs/<your config>.yaml (cf. configs/base_config.py for defaults and docs):

wandb:
    dry_run: True                                 # Wandb dry-run (not logging)
    name: ''                                      # Wandb run name
    project: os.environ.get("WANDB_PROJECT", "")  # Wandb project
    entity: os.environ.get("WANDB_ENTITY", "")    # Wandb entity
    tags: []                                      # Wandb tags
    dir: ''                                       # Wandb save folder
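
As an optional sanity check (plain shell, not part of the original instructions), you can verify that the variables above are set before launching training:

echo "WANDB entity: ${WANDB_ENTITY:?WANDB_ENTITY is not set}"
echo "WANDB API key set: ${WANDB_API_KEY:+yes}"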

Data

Download the HPatches dataset for evaluation:

mkdir -p /data/datasets/kp2d/ && cd /data/datasets/kp2d/
wget http://icvl.ee.ic.ac.uk/vbalnt/hpatches/hpatches-sequences-release.tar.gz
tar -xvf hpatches-sequences-release.tar.gz
mv hpatches-sequences-release HPatches

Download the COCO dataset for training:

mkdir -p /data/datasets/kp2d/coco/ && cd /data/datasets/kp2d/coco/
wget http://images.cocodataset.org/zips/train2017.zip
unzip train2017.zip
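
After both downloads, the dataset root should look roughly as follows (layout inferred from the commands above; exact contents may vary):

# /data/datasets/kp2d/
# ├── HPatches/          # extracted from hpatches-sequences-release.tar.gz
# └── coco/
#     └── train2017/     # unzipped COCO training images
ls /data/datasets/kp2d/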

Training

To train a model, run:

make docker-run COMMAND="python scripts/train_keypoint_net.py kp2d/configs/v4.yaml"

To train on multiple GPUs, simply replace docker-run with docker-run-mpi, as shown below. Note that we provide the v0-v4.yaml config files, one for each version of our model as presented in the ablative analysis of our paper. To evaluate the pre-trained models corresponding to each config file, please see the following section.
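
For example, multi-GPU training with the same script and config as above:

make docker-run-mpi COMMAND="python scripts/train_keypoint_net.py kp2d/configs/v4.yaml"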

Pre-trained models:

Download the pre-trained models from here and place them in /data/models/kp2d/
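
A minimal sketch of this step (the <downloaded-models-dir> placeholder stands in for wherever you saved the files from the link above; the v4.ckpt name matches the evaluation command below):

mkdir -p /data/models/kp2d/
cp <downloaded-models-dir>/v4.ckpt /data/models/kp2d/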

To evaluate any of the models, simply run:

make docker-run COMMAND="python scripts/eval_keypoint_net.py --pretrained_model /data/models/kp2d/v4.ckpt --input /data/datasets/kp2d/HPatches/"
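
To evaluate a different pre-trained model, point --pretrained_model at the corresponding checkpoint. For example, assuming the checkpoints follow the same naming as the config files (e.g. v3.ckpt):

make docker-run COMMAND="python scripts/eval_keypoint_net.py --pretrained_model /data/models/kp2d/v3.ckpt --input /data/datasets/kp2d/HPatches/"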

Evaluation for (320, 240):

| Model | Repeatability | Localization | C1 | C3 | C5 | MScore |
| --- | --- | --- | --- | --- | --- | --- |
| V0* | 0.644 | 1.087 | 0.459 | 0.816 | 0.888 | 0.518 |
| V1* | 0.678 | 0.980 | 0.453 | 0.828 | 0.905 | 0.552 |
| V2* | 0.679 | 0.942 | 0.534 | 0.860 | 0.914 | 0.573 |
| V3 | 0.685 | 0.885 | 0.602 | 0.836 | 0.886 | 0.520 |
| V4 | 0.687 | 0.892 | 0.593 | 0.867 | 0.910 | 0.546 |

Evaluation for (640, 480):

| Model | Repeatability | Localization | C1 | C3 | C5 | MScore |
| --- | --- | --- | --- | --- | --- | --- |
| V0* | 0.633 | 1.157 | 0.450 | 0.810 | 0.890 | 0.486 |
| V1* | 0.673 | 1.049 | 0.464 | 0.817 | 0.895 | 0.519 |
| V2* | 0.680 | 1.008 | 0.510 | 0.855 | 0.921 | 0.544 |
| V3 | 0.682 | 0.972 | 0.550 | 0.812 | 0.883 | 0.486 |
| V4 | 0.684 | 0.972 | 0.566 | 0.840 | 0.900 | 0.511 |

\* These models were retrained after submission; the numbers deviate slightly from those in the paper, but the same trends can be observed.

Over-fitting Examples

These examples show the model over-fitting on single images. For each image, we show the original frame with detected keypoints (left), the score map (center) and the random crop used for training (right). As training progresses, the model learns to detect salient regions in the images.

<p align="center"> <img src="media/gifs/v1.gif" alt="Target Frame" width="230" /> <img src="media/gifs/h1.gif" alt="Heatmap" width="230" /> <img src="media/gifs/w1.gif" alt="Source Frame" width="230" /> </p> <p align="center"> <img src="media/gifs/compressed_v2.gif" alt="Target Frame" width="230" /> <img src="media/gifs/compressed_h2.gif" alt="Heatmap" width="230" /> <img src="media/gifs/compressed_w2.gif" alt="Source Frame" width="230" /> </p>

Qualitative Results

<p align="center"> <img src="media/imgs/l1.png" alt="Illumination case(1)" width="600" /> <img src="media/imgs/l2.png" alt="Illumination case(2)" width="600" /> </p> <p align="center"> <img src="media/imgs/p1.png" alt="Perspective case(1)" width="600" /> <img src="media/imgs/p2.png" alt="Perspective case(2)" width="600" /> </p> <p align="center"> <img src="media/imgs/r1.png" alt="Rotation case(1)" width="600" /> <img src="media/imgs/r2.png" alt="Rotation case(2)" width="600" /> </p>

License

The source code is released under the MIT license.

Citation

Please use the following citation when referencing our work:

@inproceedings{tang2020neural,
  title={Neural Outlier Rejection for Self-Supervised Keypoint Learning},
  author={Jiexiong Tang and Hanme Kim and Vitor Guizilini and Sudeep Pillai and Rares Ambrus},
  booktitle={International Conference on Learning Representations},
  year={2020}
}