pySLAM v2.3.0

Author: Luigi Freda


pySLAM is a Python implementation of a Visual SLAM pipeline that supports monocular, stereo, and RGBD cameras. It provides the following features:

Main Scripts


You can use the pySLAM framework as a baseline to experiment with VO techniques, local features, descriptor aggregators, global descriptors, volumetric integration, and depth prediction, and to create your own proof-of-concept VO/SLAM pipeline in Python. When working with it, please keep in mind that this is a research framework written in Python and a work in progress. It is not designed for real-time performance.

Enjoy it!

<p align="center" style="margin:0"> <img src="images/STEREO.png" alt="Visual Odometry" height="160" border="0" /> <img src="images/feature-matching.png" alt="Feature Matching" height="160" border="0" /> <img src="images/RGBD2.png" alt="SLAM" height="160" border="0" /> <img src="images/main-rerun-vo-and-matching.png" alt="Feature matching and Visual Odometry" height="160" border="0" /> <img src="images/loop-detection2.png" alt="Loop detection" height="160" border="0" /> <img src="images/kitti-stereo.png" alt="Stereo SLAM" height="160" border="0" /> <img src="images/dense-reconstruction2.png" alt="Dense Reconstruction" height="160" border="0" /> <img src="images/depth-prediction.png" alt="Depth Prediction" height="160" border="0" /> <img src="images/dense-reconstruction-with-depth-prediction.png" alt="Dense Reconstruction with Depth Prediction" height="160" border="0" /> </p>

Install

First, clone this repo and its modules by running

$ git clone --recursive https://github.com/luigifreda/pyslam.git
$ cd pyslam 

Then, follow the specific install procedure for your OS. The provided scripts will create a single Python environment that hosts all the supported components and models!

Main requirements

If you encounter any issues or performance problems, refer to the TROUBLESHOOTING file for assistance.

Ubuntu

Follow the instructions reported here to create a new pyslam virtual environment with venv. The procedure has been tested on Ubuntu 18.04, 20.04, 22.04, and 24.04.

If you prefer conda, run the scripts described in this other file.

MacOS

Follow the instructions in this file. The reported procedure was tested under Sequoia 15.1.1 and Xcode 16.1.

Docker

If you prefer docker or you have an OS that is not supported yet, you can use rosdocker:

How to install non-free OpenCV modules

The provided install scripts will install a recent OpenCV version (>= 4.10) with non-free modules enabled (see the provided scripts install_pip3_packages.sh and install_opencv_python.sh). To quickly verify your installed OpenCV version, run:
$ . pyenv-activate.sh
$ ./scripts/opencv_check.py
or use the following command:
$ python3 -c "import cv2; print(cv2.__version__)"
To check whether you have non-free OpenCV module support (no errors mean success), run:
$ python3 -c "import cv2; detector = cv2.xfeatures2d.SURF_create()"

Troubleshooting and performance issues

If you run into issues or errors during the installation process or at run-time, please check the docs/TROUBLESHOOTING.md file.


Usage

Once you have run the script install_all_venv.sh (follow the instructions above according to your OS), you can open a new terminal and run:

$ . pyenv-activate.sh   #  Activate pyslam python virtual environment. This is only needed once in a new terminal.
$ ./main_vo.py

This will process a default KITTI video (available in the folder data/videos) by using its corresponding camera calibration file (available in the folder settings), and its groundtruth (available in the same data/videos folder). If matplotlib windows are used, you can stop main_vo.py by focusing/clicking on one of them and pressing the key 'Q'. Note: As explained above, the basic script main_vo.py strictly requires a ground truth.

In order to process a different dataset, you need to update the file config.yaml accordingly.

Similarly, you can test main_slam.py by running:

$ . pyenv-activate.sh   #  Activate pyslam python virtual environment. This is only needed once in a new terminal.
$ ./main_slam.py

This will process a default KITTI video (available in the folder data/videos) by using its corresponding camera calibration file (available in the folder settings). You can stop it by focusing/clicking on one of the opened matplotlib windows and pressing the key 'Q'. Note: Due to information loss in video compression, main_slam.py tracking may perform worse with the available KITTI videos than with the original KITTI image sequences. The available videos are intended for a first quick test only. Please download and use the original KITTI image sequences as explained below.

Feature tracking

If you just want to test the basic feature tracking capabilities (feature detector + feature descriptor + feature matcher) and get a taste of the different available local features, run

$ . pyenv-activate.sh   #  Activate pyslam python virtual environment. This is only needed once in a new terminal.
$ ./main_feature_matching.py

In any of the above scripts, you can choose any detector/descriptor among ORB, SIFT, SURF, BRISK, AKAZE, SuperPoint, etc. (see the section Supported Local Features below for further information).

Some basic examples are available in the subfolder test/loopclosing. In particular, for feature detection/description, you may also want to take a look at test/cv/test_feature_manager.py.
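As a quick illustration of what the matcher stage does, here is a minimal sketch of Lowe's ratio test, a standard filter used with descriptor matching (an illustrative sketch, not pySLAM's actual code): a putative match is kept only when its best descriptor distance is clearly smaller than the second-best one.

```python
# Minimal sketch of Lowe's ratio test (illustrative, not pySLAM's actual code).
# Each entry pairs the best and second-best descriptor distances for a keypoint.
def ratio_test(matches, ratio=0.75):
    return [i for i, (d1, d2) in enumerate(matches) if d1 < ratio * d2]

# Keypoint 0 has a clearly best match; keypoint 1 is ambiguous and is dropped.
print(ratio_test([(10.0, 30.0), (20.0, 21.0)]))  # → [0]
```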

Loop closing

Different loop closing methods are available, combining aggregation methods and global descriptors. Loop closing is enabled by default and can be disabled by setting kUseLoopClosing=False in config_parameters.py. Configuration options can be found in loop_closing/loop_detector_configs.py.

Examples: Start with the examples in test/loopclosing, such as test/loopclosing/test_loop_detector.py.

Vocabulary management

DBoW2, DBoW3, and VLAD require pre-trained vocabularies. The first step is to generate an array of descriptors from a set of reference images. Then, a vocabulary can be trained on it.

  1. Generate descriptors array: Use test/loopclosing/test_gen_des_array_from_imgs.py to generate the array of descriptors for training a vocabulary. Select your desired descriptor type via the tracker configuration.

  2. DBOW vocabulary generation: Train your target vocabulary by using the script test/loopclosing/test_gen_dbow_voc_from_des_array.py.

  3. VLAD vocabulary generation: Train your target VLAD "vocabulary" by using the script test/loopclosing/test_gen_vlad_voc_from_des_array.py.
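To make the VLAD step concrete, here is a minimal numpy sketch of VLAD aggregation under its usual textbook definition (nearest-centroid assignment plus residual accumulation); the actual pySLAM implementation may differ in details such as normalization.

```python
import numpy as np

# Sketch of VLAD aggregation: assign each local descriptor to its nearest
# vocabulary centroid, accumulate the residuals, flatten, and L2-normalize.
def vlad(descriptors, centroids):
    d2 = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    assign = np.argmin(d2, axis=1)           # nearest centroid per descriptor
    v = np.zeros_like(centroids)
    for desc, k in zip(descriptors, assign):
        v[k] += desc - centroids[k]          # residual accumulation
    v = v.flatten()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```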

Vocabulary-free loop closing

Most methods do not require pre-trained vocabularies. Specifically:

As mentioned above, only DBoW2, DBoW3, and VLAD require pre-trained vocabularies.

Volumetric reconstruction pipeline

The volumetric reconstruction pipeline is disabled by default. You can enable it by setting kUseVolumetricIntegration=True in config_parameters.py. This runs in the back-end. At present, it works with:

If you want a mesh as output, set kVolumetricIntegrationExtractMesh=True in config_parameters.py.
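For intuition, the core of TSDF-style volumetric integration is a per-voxel running weighted average of truncated signed distances. A minimal sketch of that update rule (the generic textbook formulation, not pySLAM's actual integrator):

```python
# Per-voxel TSDF update (generic textbook formulation, illustrative only):
# clamp the signed distance to [-1, 1] after truncation, then fold it into a
# running weighted average, capping the accumulated weight.
def tsdf_update(tsdf, weight, sdf, trunc=0.04, max_weight=100.0):
    d = max(-1.0, min(1.0, sdf / trunc))
    new_tsdf = (tsdf * weight + d) / (weight + 1.0)
    new_weight = min(weight + 1.0, max_weight)
    return new_tsdf, new_weight

print(tsdf_update(0.0, 0.0, 0.02))  # → (0.5, 1.0)
```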

Depth prediction

The available depth prediction models can be utilized both in the SLAM back-end and front-end.

Refer to the file depth_estimation/depth_estimator_factory.py for further details. Both stereo and monocular prediction approaches are supported. You can test depth prediction/estimation by using the script main_depth_prediction.py.

Notes:

Save and reload a map

When you run the script main_slam.py:

Relocalization in a loaded map

To enable map reloading and relocalization in it, open config.yaml and set

SYSTEM_STATE:
  load_state: True               # flag to enable SLAM state reloading (map state + loop closing state)
  folder_path: data/slam_state   # folder path relative to root of this repository

Pressing the Save button saves the current map, front-end, and back-end configurations. Reloading a saved map overwrites the current system configurations to ensure descriptor compatibility.

Trajectory saving

Estimated trajectories can be saved in three different formats: TUM (TUM RGB-D trajectory format), KITTI (KITTI odometry format), and EuRoC (EuRoC MAV format). To enable trajectory saving, open config.yaml, find the SAVE_TRAJECTORY section, set save_trajectory: True, select your format_type (tum, kitti, euroc), and set the output filename. For instance, for a TUM-format output:

SAVE_TRAJECTORY:
  save_trajectory: True
  format_type: tum
  filename: kitti00_trajectory.txt
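For reference, a TUM-format trajectory file stores one pose per line as `timestamp tx ty tz qx qy qz qw` (translation plus unit quaternion, scalar-last), while a KITTI-format file stores the 12 entries of the row-major 3x4 pose matrix. A minimal writer sketch for the TUM line layout (illustrative, not pySLAM's internal code):

```python
# Format one pose in TUM trajectory layout: "timestamp tx ty tz qx qy qz qw".
def tum_line(timestamp, translation, quaternion):
    tx, ty, tz = translation
    qx, qy, qz, qw = quaternion          # unit quaternion, scalar-last
    return (f"{timestamp:.6f} {tx:.6f} {ty:.6f} {tz:.6f} "
            f"{qx:.6f} {qy:.6f} {qz:.6f} {qw:.6f}")

print(tum_line(0.0, (1.0, 2.0, 3.0), (0.0, 0.0, 0.0, 1.0)))
```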

SLAM GUI

Some quick information about the non-trivial GUI buttons of main_slam.py:

Monitor the logs for tracking, local mapping, and loop closing simultaneously

The logs generated by the modules local_mapping.py, loop_closing.py, loop_detecting_process.py, and global_bundle_adjustments.py are collected in the files local_mapping.log, loop_closing.log, loop_detecting.log, and gba.log, which are all stored in the folder logs. For fun/debugging, you can monitor each parallel flow by running the following command in a separate shell:
$ tail -f logs/<log file name>
Otherwise, just run the script:
$ ./scripts/launch_tmux_slam.sh
from the repo root folder. Press CTRL+A and then CTRL+Q to exit the tmux environment.


System overview

Here you can find a couple of diagram sketches that provide an overview of the main system components and their relationships and dependencies. Writing proper documentation is a work in progress.


Supported components and models

Supported local features

At present, the following feature detectors are supported:

The following feature descriptors are supported:

For more information, refer to the file local_features/feature_types.py. Some of the local features consist of a joint detector-descriptor. You can start playing with the supported local features by taking a look at test/cv/test_feature_manager.py and main_feature_matching.py.

In both the scripts main_vo.py and main_slam.py, you can create your preferred detector-descriptor configuration and feed it to the function feature_tracker_factory(). Some ready-to-use configurations are already available in the file local_features/feature_tracker_configs.py.

The function feature_tracker_factory() can be found in the file local_features/feature_tracker.py. Take a look at the file local_features/feature_manager.py for further details.

N.B.: You just need a single python environment to be able to work with all the supported local features!

Supported matchers

See the file local_features/feature_matcher.py for further details.

Supported global descriptors and local descriptor aggregation methods

Local descriptor aggregation methods

NOTE: iBoW and OBIndex2 incrementally build a binary image index and do not need a prebuilt vocabulary. In the implemented classes, when needed, the input non-binary local descriptors are transparently transformed into binary descriptors.
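One simple way such a float-to-binary transformation can work (an assumed scheme for illustration; the actual conversion in the iBoW/OBIndex2 wrappers may differ) is to threshold each descriptor dimension, e.g. at its per-dimension median over the batch:

```python
import numpy as np

# Illustrative float-to-binary descriptor conversion (assumed scheme, not
# necessarily what the iBoW/OBIndex2 wrappers do): threshold each dimension
# at its per-dimension median across the descriptor batch.
def binarize(descriptors):
    med = np.median(descriptors, axis=0)
    return (descriptors > med).astype(np.uint8)
```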

Global descriptors

Also referred to as holistic descriptors:

Different loop closing methods are available; they combine the above aggregation methods and global descriptors. See the file loop_closing/loop_detector_configs.py for further details.

Supported depth prediction models

Both monocular and stereo depth prediction models are available. The SGBM algorithm is included as a classic reference approach.


Datasets

Five different types of datasets are available:

| Dataset | type in config.yaml |
| --- | --- |
| KITTI odometry data set (grayscale, 22 GB) | `type: KITTI_DATASET` |
| TUM dataset | `type: TUM_DATASET` |
| EuRoC dataset | `type: EUROC_DATASET` |
| Video file | `type: VIDEO_DATASET` |
| Folder of images | `type: FOLDER_DATASET` |

Use the download scripts available in the folder scripts to download some of the following datasets.

KITTI Datasets

pySLAM code expects the following structure in the specified KITTI path folder (specified in the section KITTI_DATASET of the file config.yaml):

├── sequences
│   ├── 00
│   ...
│   └── 21
├── poses
    ├── 00.txt
    ...
    └── 10.txt

  1. Download the dataset (grayscale images) from http://www.cvlibs.net/datasets/kitti/eval_odometry.php and prepare the KITTI folder as specified above

  2. Select the corresponding calibration settings file (section KITTI_DATASET: cam_settings: in the file config.yaml)
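Each line of a poses/XX.txt file contains 12 floats: the row-major 3x4 pose matrix [R | t] of that frame. A minimal parsing sketch (illustrative; pySLAM has its own dataset readers):

```python
# Parse one line of a KITTI poses file: 12 floats forming the row-major
# 3x4 pose matrix [R | t].
def parse_kitti_pose(line):
    vals = [float(x) for x in line.split()]
    if len(vals) != 12:
        raise ValueError("expected 12 values per KITTI pose line")
    return [vals[0:4], vals[4:8], vals[8:12]]

identity = "1 0 0 0 0 1 0 0 0 0 1 0"
print(parse_kitti_pose(identity)[0])  # → [1.0, 0.0, 0.0, 0.0]
```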

TUM Datasets

pySLAM code expects a file associations.txt in each TUM dataset folder (specified in the section TUM_DATASET: of the file config.yaml).

  1. Download a sequence from http://vision.in.tum.de/data/datasets/rgbd-dataset/download and uncompress it.
  2. Associate RGB images and depth images using the python script associate.py. You can generate your associations.txt file by executing:
$ python associate.py PATH_TO_SEQUENCE/rgb.txt PATH_TO_SEQUENCE/depth.txt > associations.txt
  3. Select the corresponding calibration settings file (section TUM_DATASET: cam_settings: in the file config.yaml).
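Under the hood, association is a nearest-timestamp match within a tolerance. A simplified sketch of what associate.py computes (the official script additionally supports options such as a time offset):

```python
# Simplified timestamp association (what associate.py computes): for each RGB
# timestamp, pick the closest unused depth timestamp within max_difference.
def associate(rgb_ts, depth_ts, max_difference=0.02):
    matches, used = [], set()
    for t in rgb_ts:
        best = min(depth_ts, key=lambda d: abs(d - t))
        if abs(best - t) <= max_difference and best not in used:
            matches.append((t, best))
            used.add(best)
    return matches

print(associate([0.0, 1.0], [0.01, 1.5]))  # → [(0.0, 0.01)]
```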

EuRoC Datasets

  1. Download a sequence (ASL format) from http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets (check this direct link)
  2. Use the script io/generate_euroc_groundtruths_as_tum.sh to generate the TUM-like groundtruth files path + '/' + name + '/mav0/state_groundtruth_estimate0/data.tum' that are required by the EurocGroundTruth class.
  3. Select the corresponding calibration settings file (section EUROC_DATASET: cam_settings: in the file config.yaml).

Replica Datasets

  1. You can download the zip file containing all the sequences by running:
    $ wget https://cvg-data.inf.ethz.ch/nice-slam/data/Replica.zip
  2. Then, uncompress it and deploy the files as you wish.
  3. Select the corresponding calibration settings file (section REPLICA_DATASET: cam_settings: in the file config.yaml).

Camera Settings

The folder settings contains the camera settings files which can be used for testing the code. These are the same settings used in the framework ORB-SLAM2. You can easily modify one of these files to create your own calibration file (for your new datasets).

In order to calibrate your camera, you can use the scripts in the folder calibration. In particular:

  1. Use the script grab_chessboard_images.py to collect a sequence of images where the chessboard can be detected (set the chessboard size therein, you can use the calibration pattern calib_pattern.pdf in the same folder)
  2. Use the script calibrate.py to process the collected images and compute the calibration parameters (set the chessboard size therein)

For more information on the calibration process, see this tutorial or this other link.
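The calibration output boils down to the pinhole intrinsics fx, fy, cx, cy (plus distortion coefficients) that the settings files store. A minimal projection sketch, using example values similar to those in the KITTI settings files (distortion ignored for simplicity):

```python
# Pinhole projection with the intrinsics a calibration file stores:
# u = fx * X / Z + cx,  v = fy * Y / Z + cy  (distortion ignored here).
def project(point3d, fx, fy, cx, cy):
    X, Y, Z = point3d
    return fx * X / Z + cx, fy * Y / Z + cy

# Example values similar to the KITTI settings files.
u, v = project((1.0, 2.0, 10.0), 718.856, 718.856, 607.193, 185.216)
print(round(u, 4), round(v, 4))  # → 679.0786 328.9872
```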

If you want to use your camera, you have to:


Comparison pySLAM vs ORB-SLAM3

For a comparative evaluation of the trajectories estimated by pySLAM and by ORB-SLAM3, see this trajectory comparison notebook.

Note that pySLAM saves its pose estimates in an online fashion: at each frame, the current pose estimate is saved at the end of the front-end tracking iteration. On the other hand, ORB-SLAM3 pose estimates are saved at the end of the full dataset playback: this means each pose estimate $q$ of ORB-SLAM3 is refined multiple times by LBA and BA over the multiple window optimizations that cover $q$.

You can save your pyslam trajectories as detailed here.


Contributing to pySLAM

If you like pySLAM and would like to contribute to the code base, you can report bugs, leave comments, and propose new features through issues and pull requests on GitHub. Feel free to get in touch at luigifreda(at)gmail[dot]com. Thank you!


References

Suggested books:

Suggested material:

Moreover, you may want to have a look at the OpenCV guide or tutorials.


Credits


TODOs

Many improvements and additional features are currently under development: