
General Place Recognition Survey: Towards Real-World Autonomy


A curated list of General Place Recognition papers. Search among 170 papers!

Table of contents

- Existing Datasets
- [Review Papers](#review_paper)
- [Representation](#representation)
- [Recognizing the Right Place Against Challenges](#recognize_challenge)
- [Application & Trends](#application)
- [Development Tools](#development_tools)


Existing Datasets

| Topic | Name | Year | Image type | Environment | Illumination | Viewpoint | Ground Truth | Labels | Extra Information |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Generic | New College and City Centre | 2008 | RGB | Outdoor | slight | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | GPS |
| Generic | New College Vision and Laser | 2009 | Gray. | Outdoor | slight | :heavy_check_mark: | :heavy_check_mark: | | GPS, IMU, LiDAR |
| Generic | Rawseeds | 2006 | RGB | Indoor/Outdoor | :heavy_check_mark: | :heavy_check_mark: | | | GPS, LiDAR |
| Generic | Ford Campus | 2011 | RGB | Urban | slight | :heavy_check_mark: | | | GPS, IMU, LiDAR |
| Generic | Malaga Parking 6L | 2009 | RGB | Outdoor | :heavy_check_mark: | | | | GPS, IMU, LiDAR |
| Generic | KITTI Odometry | 2012 | Gray./RGB | Urban | slight | :heavy_check_mark: | | | GPS, IMU, LiDAR |
| Long-term | St. Lucia | 2010 | RGB | Urban | :heavy_check_mark: | slight | | | GPS |
| Long-term | COLD | 2009 | RGB | Indoor | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | LiDAR |
| Long-term | Oxford RobotCar | 2017 | RGB | Urban | :heavy_check_mark: | :heavy_check_mark: | | | GPS, IMU, LiDAR |
| Long-term | Gardens Point Walking | 2014 | RGB | Indoor/Outdoor | :heavy_check_mark: | :heavy_check_mark: | | | - |
| Long-term | MSLS | 2020 | RGB | Urban | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | GPS |
| Across seasons | Nurburgring and Alderley | 2012 | RGB | Urban | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | - |
| Across seasons | Nordland | 2013 | RGB | Outdoor | :heavy_check_mark: | :heavy_check_mark: | | | GPS |
| Across seasons | CMU | 2011 | RGB | Urban | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | GPS |
| Across seasons | Freiburg (FAS) | 2014 | RGB | Urban | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | GPS |
| Across seasons | VPRiCE | 2015 | RGB | Outdoor | :heavy_check_mark: | :heavy_check_mark: | | | - |
| RGB-D | TUM RGB-D | 2012 | RGB-D | Indoor | :heavy_check_mark: | :heavy_check_mark: | | | IMU |
| RGB-D | Microsoft 7-Scenes | 2013 | RGB-D | Indoor | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | - |
| RGB-D | ICL-NUIM | 2014 | RGB-D | Indoor | :heavy_check_mark: | :heavy_check_mark: | | | - |
| Semantic | SemanticKITTI | 2019 | RGB | Urban | :heavy_check_mark: | :heavy_check_mark: | | | GPS, IMU, LiDAR |
| Semantic | Cityscapes | 2016 | RGB | Urban | :heavy_check_mark: | :heavy_check_mark: | | | GPS |
| Semantic | CSC | 2019 | RGB | Outdoor | :heavy_check_mark: | :heavy_check_mark: | | | LiDAR |
| Train networks | Cambridge Landmarks | 2015 | RGB | Outdoor | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | - |
| Train networks | Pittsburgh250k | 2013 | RGB | Urban | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | GPS |
| Train networks | Tokyo 24/7 | 2015 | RGB | Urban | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | GPS |
| Train networks | SPED | 2017 | RGB | Outdoor | :heavy_check_mark: | :heavy_check_mark: | | | - |
| Omni-directional | New College Vision and Laser | 2009 | Gray. | Outdoor | slight | :heavy_check_mark: | :heavy_check_mark: | | GPS, IMU, LiDAR |
| Omni-directional | MOLP | 2018 | Gray./D | Outdoor | :heavy_check_mark: | :heavy_check_mark: | | | GPS |
| Omni-directional | NCLT | 2016 | RGB | Outdoor | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | GPS, LiDAR |
| Aerial/UAV | Shopping Street 1/2 | 2018 | Gray. | Urban | slight | :heavy_check_mark: | :heavy_check_mark: | | - |
| Aerial/UAV | EuRoC | 2016 | Gray. | Indoor | :heavy_check_mark: | :heavy_check_mark: | | | IMU |
| Underwater | UWSim | 2016 | RGB | Under-water | :heavy_check_mark: | | | | GPS |
| Range sensors | MulRan | 2020 | 3D Point clouds | Urban | :heavy_check_mark: | :heavy_check_mark: | | | LiDAR, RADAR |
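
Most entries above ship GPS or pose ground truth, which is what retrieval-style place recognition is usually scored against: a query is localized correctly if any of its top-n retrieved database frames lies within a fixed radius of the query's true position (25 m is a common choice). Below is a minimal, illustrative sketch of that recall@n protocol; `recall_at_n` and its arguments are hypothetical names, and it assumes precomputed, L2-normalized global descriptors plus metric (e.g., UTM) coordinates:

```python
import numpy as np

def recall_at_n(query_desc, db_desc, query_xy, db_xy, n=1, dist_thresh=25.0):
    """Fraction of queries whose top-n retrievals contain a true match.

    query_desc: (Q, D) L2-normalized global descriptors of the queries.
    db_desc:    (M, D) L2-normalized global descriptors of the database.
    query_xy:   (Q, 2) metric ground-truth positions (e.g., UTM from GPS).
    db_xy:      (M, 2) metric positions of the database frames.
    """
    # Cosine similarity reduces to a dot product for L2-normalized descriptors.
    sims = query_desc @ db_desc.T                  # (Q, M) similarity matrix
    top_n = np.argsort(-sims, axis=1)[:, :n]       # best-n database indices per query
    # A retrieval counts as correct if it lies within dist_thresh meters
    # of the query's ground-truth position.
    dists = np.linalg.norm(db_xy[top_n] - query_xy[:, None, :], axis=-1)
    return float((dists <= dist_thresh).any(axis=1).mean())
```

The brute-force dot product is O(QM) in time and memory; large benchmarks typically swap it for an approximate nearest-neighbor index (e.g., FAISS), but the scoring rule stays the same. Note also that the radius-based check fits GPS-tagged sets such as Pittsburgh250k, Tokyo 24/7, and MSLS, while frame-aligned sets such as Nordland are usually scored by frame index instead.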

<a id="review_paper"></a>

Review Papers

<a id="representation"></a>

Representation

A. Low-Level Representation

<a id="camera_based_approach"></a>

A.1 Camera-Related Approaches

<a id="range_based_approach"></a>

A.2 Range Sensor-Related Approaches

<a id="high_level_representation"></a>

B. High-Level Representation

<a id="graph"></a>

B.1 Graph

<a id="embeddings"></a>

B.2 Embeddings

<a id="recognize_challenge"></a>

Recognizing the Right Place Against Challenges

<a id="appearance_change"></a>

A. Appearance Change

A.1 Place Modeling

A.2 Place Matching with Sequences

<a id="viewpoint_diference"></a>

B. Viewpoint Difference

<a id="generalization_ability"></a>

C. Generalization Ability

<a id="efficiency"></a>

D. Efficiency

<a id="uncertainty_estimation"></a>

E. Uncertainty Estimation

<a id="application"></a>

Application & Trends

<a id="navigaiton"></a>

A. Long-Term & Large-Scale Navigation

<a id="vtrn"></a>

B. Visual Terrain Relative Navigation

<a id="multi-agent-slam"></a>

C. Multi-Agent Localization and Mapping

<a id="lifelong"></a>

D. Bio-Inspired and Lifelong Autonomy

<a id="development_tools"></a>

Development Tools

<a id="dataset"></a>

A. Public Datasets

<a id="libraries"></a>

B. Supported Libraries
