Event-Based Visual Place Recognition With Ensembles of Temporal Windows

License: CC BY-NC-SA 4.0 · <a href="https://qcr.github.io" alt="QUT Centre for Robotics Open Source"><img src="https://img.shields.io/badge/collection-QUT%20Robotics-%23043d71?style=flat-square" /></a>

License + Attribution

This code is licensed under CC BY-NC-SA 4.0. Commercial usage is not permitted. If you use this dataset or the code in a scientific publication, please cite the following paper (preprint and additional material):

@article{fischer2020event,
  title={Event-Based Visual Place Recognition With Ensembles of Temporal Windows},
  author={Fischer, Tobias and Milford, Michael},
  journal={IEEE Robotics and Automation Letters},
  volume={5},
  number={4},
  pages={6924--6931},
  year={2020}
}

The Brisbane-Event-VPR dataset accompanies this code repository: https://zenodo.org/record/4302805

Dataset preview

Code overview

The following code is available:

Please note that in our paper we used manually annotated and then interpolated correspondences; here we instead provide matches based on the GPS data. The results obtained with the code in this repository will therefore differ slightly from those reported in the paper.
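Since the provided matches are GPS-based, the idea can be sketched as a nearest-neighbour search over GPS fixes. This is an illustrative sketch only; the function names are hypothetical and not part of this repository:

```python
import numpy as np

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes (degrees)."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(np.asarray(lon2) - np.asarray(lon1))
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * r * np.arcsin(np.sqrt(a))

def gps_matches(query_track, ref_track):
    """For each query fix (lat, lon), return the index of the nearest
    reference fix. Tracks are sequences of (lat, lon) pairs."""
    q = np.asarray(query_track)
    r = np.asarray(ref_track)
    # broadcast to a (num_queries, num_refs) distance matrix
    d = haversine_m(q[:, None, 0], q[:, None, 1], r[None, :, 0], r[None, :, 1])
    return d.argmin(axis=1)
```

For example, a query fix between two reference fixes is assigned to the closer of the two.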

Reconstruct videos from events

  1. Clone this repository: `git clone https://github.com/Tobias-Fischer/ensemble-event-vpr.git`

  2. Clone https://github.com/cedric-scheerlinck/rpg_e2vid and follow the instructions to create a conda environment and download the pretrained models.

  3. Download the Brisbane-Event-VPR dataset.

  4. Now convert the bag files to txt/zip files that can be used by the event-to-video code: `python convert_rosbags.py`. Make sure to adjust the path to the `extract_events_from_rosbag.py` file from the rpg_e2vid repository.

  5. Now do the event-to-video conversion: `python reconstruct_videos.py`. Make sure to adjust the path to the `run_reconstruction.py` file from the rpg_e2vid repository.

Create suitable conda environment

  1. Create a new conda environment with the dependencies: `conda create --name brisbaneeventvpr tensorflow-gpu pynmea2 scipy matplotlib numpy tqdm jupyterlab opencv pip ros-noetic-rosbag ros-noetic-cv-bridge python=3.8 -c conda-forge -c robostack`

Export RGB frames from rosbags

  1. `conda activate brisbaneeventvpr`

  2. `python export_frames_from_rosbag.py`

Event-based VPR with ensembles

  1. Create a new conda environment with the dependencies (skip this step if you already created the `brisbaneeventvpr` environment above): `conda create --name brisbaneeventvpr tensorflow-gpu pynmea2 scipy matplotlib numpy tqdm jupyterlab opencv pip`

  2. `conda activate brisbaneeventvpr`

  3. `git clone https://github.com/QVPR/netvlad_tf_open.git`

  4. `cd netvlad_tf_open && pip install -e .`

  5. Download the NetVLAD checkpoint here (1.1 GB). Extract the zip and move its contents into the `checkpoints` folder of the `netvlad_tf_open` repository.

  6. Open `Brisbane Event VPR.ipynb` and adjust the path to the `dataset_folder`.

  7. You can now run the code in `Brisbane Event VPR.ipynb`.
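The core idea of the ensemble scheme can be illustrated in a few lines: compute one distance matrix per temporal window length and merge them before retrieving the best reference match per query. This is a simplified numpy sketch (averaging the ensemble members), not the full method from the paper:

```python
import numpy as np

def ensemble_best_matches(dist_matrices):
    """Merge per-temporal-window distance matrices and retrieve matches.

    dist_matrices: list of (num_queries, num_refs) arrays, one per
    temporal window length. Averaging is a simplification of the
    combination scheme used in the paper.
    """
    combined = np.mean(np.stack(dist_matrices), axis=0)
    return combined.argmin(axis=1)
```

A window length that happens to produce a noisy distance matrix is then moderated by the other ensemble members instead of dictating the match on its own.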

Related works

Please check out this collection of related works on place recognition.