Awesome SFM-AR-Visual-SLAM
Visual SLAM
GSLAM
A general SLAM framework that supports both feature-based and direct methods and handles different sensors, including monocular cameras, RGB-D sensors, and other input types. https://github.com/zdzhaoyong/GSLAM
OKVIS: Open Keyframe-based Visual-Inertial SLAM
http://ethz-asl.github.io/okvis/index.html
Uncertainty-aware Receding Horizon Exploration and Mapping Planner
https://github.com/unr-arl/rhem_planner
S-PTAM: Stereo Parallel Tracking and Mapping
mcptam
MCPTAM is a set of ROS nodes for running Real-time 3D Visual Simultaneous Localization and Mapping (SLAM) using Multi-Camera Clusters. It includes tools for calibrating both the intrinsic and extrinsic parameters of the individual cameras within the rigid camera rig.
https://github.com/aharmat/mcptam
FAB-MAP
A visual place recognition algorithm. https://github.com/arrenglover/openfabmap
RatSLAM
https://github.com/davidmball/ratslam
maplab
An Open Framework for Research in Visual-Inertial Mapping and Localization, from Roland Siegwart's lab at ETH Zurich. https://github.com/ethz-asl/maplab
OpenVSLAM: Versatile Visual SLAM Framework
https://github.com/xdspacelab/openvslam
SLAM with AprilTags
https://github.com/berndpfrommer/tagslam ROS-ready, with bag files available
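Not tagslam's own API, but a minimal OpenCV-based sketch of the underlying idea: detect AprilTag-family fiducials and recover each tag's pose relative to the camera. It assumes the pre-4.7 function-style cv2.aruco API, and the intrinsics and tag size are placeholders.

```python
# Minimal sketch (not tagslam's API): detect AprilTag-family markers with
# OpenCV's aruco module and recover per-tag poses. Assumes the pre-4.7
# cv2.aruco function-style API and made-up intrinsics; adapt to your camera.
import cv2
import numpy as np

K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])  # assumed intrinsics
dist = np.zeros(5)                                           # assumed: no distortion
tag_size = 0.16                                              # tag edge length [m], assumed

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
params = cv2.aruco.DetectorParameters_create()

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=params)
if ids is not None:
    # One rvec/tvec (tag pose in the camera frame) per detected marker
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, tag_size, K, dist)
    for tag_id, rvec, tvec in zip(ids.ravel(), rvecs, tvecs):
        R, _ = cv2.Rodrigues(rvec)
        print(tag_id, tvec.ravel())  # tag position in the camera frame
```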
SE(2)-constrained SLAM fusing odometry and vision
https://github.com/izhengfan/se2clam
RGB-D Visual SLAM
Fast Odometry and Scene Flow from RGB-D Cameras
https://github.com/MarianoJT88/Joint-VO-SF published in ICRA 2017
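Most RGB-D odometry and SLAM front ends start by back-projecting the depth image into a 3D point cloud with the pinhole model. A minimal numpy sketch, with assumed intrinsics and depth scale:

```python
# Minimal sketch: back-project a depth image into an organized point cloud
# with the pinhole model. Intrinsics (fx, fy, cx, cy) and the depth scale
# are assumptions; real pipelines read them from the sensor calibration.
import numpy as np

def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5, depth_scale=1.0):
    """depth: (H, W) array in sensor units; returns (H, W, 3) points in metres."""
    z = depth.astype(np.float64) / depth_scale
    h, w = z.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.dstack([x, y, z])  # invalid pixels keep z == 0

# Example with a synthetic depth image stored at a 1/5000 m scale (assumed)
points = depth_to_points(np.random.randint(0, 5000, (480, 640)), depth_scale=5000.0)
print(points.shape)  # (480, 640, 3)
```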
Real-Time Appearance-Based Mapping
http://wiki.ros.org/rtabmap_ros ... Many demos are available on the website, along with several ROS bags.
ScaViSLAM: a general and scalable framework for visual SLAM
https://github.com/strasdat/ScaViSLAM/
RGBDSLAM v2
https://github.com/felixendres/rgbdslam_v2 ROS-ready; it accompanies Felix Endres' PhD thesis (University of Freiburg)
SLAM in unstructured environments
https://github.com/tu-darmstadt-ros-pkg/hector_slam
Dense Visual Odometry and SLAM (dvo_slam)
https://github.com/tum-vision/dvo_slam
CoSLAM: Collaborative Visual SLAM in Dynamic Environments
https://github.com/danping/CoSLAM
ElasticFusion: real-time dense visual SLAM
https://github.com/mp3guy/ElasticFusion ... it has a nice GUI, plus a dataset, paper, and video.
Kintinuous: real-time dense visual SLAM
https://github.com/mp3guy/Kintinuous
Deferred Triangulation SLAM (DT-SLAM)
Based on PTAM; tracks both 3D triangulated and 2D non-triangulated features. https://github.com/plumonito/dtslam
Dense RGB-D SLAM
https://github.com/dorian3d/RGBiD-SLAM
M2SLAM: Visual SLAM with Memory Management for Large-Scale Environments
https://github.com/lifunudt/M2SLAM
SceneLib2 - MonoSLAM open-source library
An open-source C++ reimplementation of Andrew Davison's MonoSLAM.
https://github.com/hanmekim/SceneLib2
Next-best-view planner
https://github.com/ethz-asl/nbvplanner
Dynamic RGB-D Encoder SLAM for a Differential-Drive Robot
https://github.com/ydsf16/dre_slam ROS Kinetic, OpenCV 4.0, YOLOv3, Ceres
DynaSLAM: Tracking, Mapping and Inpainting in Dynamic Scenes
https://github.com/BertaBescos/DynaSLAM
Augmented Reality
PTAM (Parallel Tracking and Mapping)
http://www.robots.ox.ac.uk/~gk/PTAM/
PTAM for Android
https://github.com/damienfir/android-ptam
Monocular SLAM
ORB-SLAM: A Versatile and Accurate Monocular SLAM System
https://github.com/raulmur/ORB_SLAM ....
Its successor, ORB-SLAM2, is a real-time SLAM library for monocular, stereo, and RGB-D cameras: https://github.com/raulmur/ORB_SLAM2
An iOS port: https://github.com/Thunderbolt-sx/ORB_SLAM_iOS
ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM
https://github.com/UZ-SLAMLab/ORB_SLAM3
REMODE (REgularized MOnocular Depth Estimation)
https://github.com/uzh-rpg/rpg_open_remode ... Probabilistic, Monocular Dense Reconstruction in Real Time
Fast Semi-Direct Monocular Visual Odometry
https://github.com/pizzoli/rpg_svo
Fast Semi-Direct Visual Odometry for Monocular, Wide Angle, and Multi-camera Systems
No loop closure or bundle adjustment. http://rpg.ifi.uzh.ch/svo2.html
LSD-SLAM: Large-Scale Direct Monocular SLAM
https://github.com/tum-vision/lsd_slam
A modification of the original package to work with rolling shutter cameras (cheap webcams): https://github.com/FirefoxMetzger/lsd_slam The change is explained in this video: https://www.youtube.com/watch?v=TZRICW6R24o
ROS wrapper for libviso2
https://github.com/srv/viso2 Supported up to ROS Indigo.
Visual-Inertial fusion-based Monocular dEnse mAppiNg (VI-MEAN)
https://github.com/HKUST-Aerial-Robotics/VI-MEAN with paper and video (ICRA 2017), and a rosbag as well.
CubeSLAM: monocular 3D object SLAM
https://github.com/shichaoy/cube_slam
DeepFactors: Real-Time Probabilistic Dense Monocular SLAM
ORB-SLAM RGBD + Inertial
https://github.com/xiefei2929/ORB_SLAM3-RGBD-Inertial
LiDAR-based
LIMO: Lidar-Monocular Visual Odometry
https://github.com/johannes-graeter/limo A virtual machine with all the dependencies is provided.
LiDAR-based real-time 3D localization and mapping
https://github.com/erik-nelson/blam
SegMatch
https://github.com/ethz-asl/segmatch A 3D segment-based loop-closure algorithm | ROS-ready
LIO-SAM
https://github.com/TixiaoShan/LIO-SAM real-time lidar-inertial odometry
UV-SLAM: Unconstrained Line-based SLAM Using Vanishing Points for Structural Mapping | ICRA'22 https://github.com/url-kaist/UV-SLAM
Visual Odometry
Direct Sparse Odometry (DSO)
https://github.com/JakobEngel/dso
DPPTAM: monocular odometry algorithm
https://github.com/alejocb/dpptam Dense Piecewise Planar Tracking and Mapping from a Monocular Sequence (IROS 2015)
Stereo Visual Odometry
https://github.com/rubengooj/StVO-PL Stereo Visual Odometry by combining point and line segment features
Monocular Motion Estimation on Manifolds
https://github.com/johannes-graeter/momo
Visual Odometry Revisited: What Should Be Learnt?
Paper + PyTorch code: https://github.com/Huangying-Zhan/DF-VO
SimVODIS: Simultaneous Visual Odometry, Object Detection, and Instance Segmentation
https://github.com/Uehwan/SimVODIS
Modality-invariant Visual Odometry for Embodied Vision
RGB-only or RGB + depth https://memmelma.github.io/vot/
Visual-Inertial Odometry
Kalibr
Camera/IMU calibration toolbox and more. https://github.com/ethz-asl/kalibr
Camera-to-IMU calibration toolbox https://github.com/hovren/crisp
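Kalibr itself is driven by ROS bags and calibration targets and also estimates camera-IMU extrinsics and time offsets. As a smaller, generic illustration of the intrinsics part only (explicitly not Kalibr's workflow), a checkerboard calibration sketch with OpenCV, where the board geometry and image paths are assumptions:

```python
# Generic intrinsics-calibration sketch with OpenCV (not Kalibr's workflow).
# Board geometry, square size, and image paths are assumptions.
import glob
import cv2
import numpy as np

pattern = (9, 6)          # inner corners of the assumed checkerboard
square = 0.025            # square size in metres, assumed

# 3D corner positions on the board plane (z = 0)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("camera matrix:\n", K)
```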
ROVIO
Robust Visual Inertial Odometry https://github.com/ethz-asl/rovio
Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight
https://github.com/KumarRobotics/msckf_vio
A Robust and Versatile Monocular Visual-Inertial State Estimator
https://github.com/HKUST-Aerial-Robotics/VINS-Mono
VINS modification for omnidirectional + stereo cameras
https://github.com/gaowenliang/vins_so
Realtime Edge Based Inertial Visual Odometry for a Monocular Camera
https://github.com/JuanTarrio/rebvo Specially targeted at embedded hardware.
Robocentric visual-inertial odometry
https://github.com/rpng/R-VIO Monocular camera + 6-DOF IMU
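Common to these visual-inertial estimators is an IMU-driven prediction step between camera frames. A minimal strapdown propagation sketch (no biases, noise, or covariance propagation), with the gravity convention, units, and sample rate as assumptions:

```python
# Minimal strapdown propagation sketch: the prediction step shared by most
# visual-inertial filters (ROVIO, MSCKF, R-VIO, ...). Biases, noise handling,
# and covariance propagation are omitted; gravity and units are assumptions.
import numpy as np
from scipy.spatial.transform import Rotation

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity [m/s^2], assumed z-up

def propagate(R_wb, v_w, p_w, gyro, accel, dt):
    """R_wb: body->world Rotation; v_w, p_w: world velocity/position;
    gyro [rad/s] and accel [m/s^2] are body-frame IMU samples."""
    a_w = R_wb.apply(accel) + GRAVITY          # specific force rotated to world + gravity
    p_w = p_w + v_w * dt + 0.5 * a_w * dt**2   # constant-acceleration position update
    v_w = v_w + a_w * dt
    R_wb = R_wb * Rotation.from_rotvec(gyro * dt)  # right-multiply body-frame increment
    return R_wb, v_w, p_w

# Dead-reckon a short stream of hypothetical IMU samples at an assumed 200 Hz
R, v, p = Rotation.identity(), np.zeros(3), np.zeros(3)
for gyro, accel in [(np.array([0, 0, 0.1]), np.array([0, 0, 9.81]))] * 200:
    R, v, p = propagate(R, v, p, gyro, accel, dt=1 / 200)
print(p)
```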
SFM
Structure from Motion (SfM) for Unordered Image Collections
https://github.com/TheFrenchLeaf/Bundle
Android SFM
https://github.com/danylaksono/Android-SfM-client
Five-point, 6-point, 7-point, and 8-point algorithms
OpenGV (open geometric vision): https://github.com/marknabil/opengv
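As a quick illustration of the five-point algorithm in practice, a two-view relative pose sketch using OpenCV (whose findEssentialMat wraps a five-point solver inside RANSAC) rather than OpenGV; intrinsics and image names are placeholders:

```python
# Sketch of two-view relative pose via the five-point algorithm (OpenCV's
# findEssentialMat + RANSAC). Intrinsics and file names are placeholders.
import cv2
import numpy as np

K = np.array([[718.856, 0, 607.1928], [0, 718.856, 185.2157], [0, 0, 1]])  # example intrinsics

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
# Decompose E and keep the (R, t) with positive depth; t is only up to scale.
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print(R, t.ravel())
```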
OpenSfM
A Structure-from-Motion library written in Python on top of OpenCV. It includes a Dockerfile that installs everything on Ubuntu 14.04. https://github.com/mapillary/OpenSfM
Unsupervised Learning of Depth and Ego-Motion from Video
An unsupervised learning framework for depth and ego-motion estimation from monocular videos https://github.com/tinghuiz/SfMLearner
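The core of this line of work is a view-synthesis (photometric) loss: warp the source frame into the target frame using the predicted depth and relative pose, then penalize the photometric difference. A hedged PyTorch sketch under assumed tensor shapes, not the repository's actual code:

```python
# Sketch of a photometric view-synthesis loss for unsupervised depth/ego-motion
# learning. Tensor shapes and conventions are assumptions, not SfMLearner's code.
import torch
import torch.nn.functional as F

def photometric_loss(I_t, I_s, D_t, K, T_t2s):
    """I_t, I_s: (B,3,H,W) target/source images; D_t: (B,1,H,W) predicted depth;
    K: (B,3,3) intrinsics; T_t2s: (B,4,4) target-to-source camera pose."""
    B, _, H, W = I_t.shape
    # homogeneous pixel grid of the target frame
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float().view(1, 3, -1).to(I_t.device)
    # back-project target pixels to 3D using the predicted depth
    cam = torch.inverse(K) @ pix * D_t.view(B, 1, -1)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=I_t.device)], dim=1)
    # transform into the source frame and project with the intrinsics
    proj = K @ (T_t2s @ cam_h)[:, :3]
    u = proj[:, 0] / proj[:, 2].clamp(min=1e-6)
    v = proj[:, 1] / proj[:, 2].clamp(min=1e-6)
    # sample the source image at the projected locations (normalized to [-1, 1])
    grid = torch.stack([2 * u / (W - 1) - 1, 2 * v / (H - 1) - 1], dim=-1).view(B, H, W, 2)
    I_warp = F.grid_sample(I_s, grid, align_corners=True)
    return (I_t - I_warp).abs().mean()
```

Real implementations add masking of unexplainable pixels and depth-smoothness terms on top of this basic reconstruction loss.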
CVPR 2015 Tutorial for open source SFM
Source material for the CVPR 2015 Tutorial: Open Source Structure-from-Motion https://github.com/mleotta/cvpr2015-opensfm
Deep Permutation Equivariant Structure from Motion
https://github.com/drormoran/Equivariant-SFM
Concepts in MATLAB
http://vis.uky.edu/~stewe/FIVEPOINT/
SFMedu: A Matlab-based Structure-from-Motion System for Education https://github.com/jianxiongxiao/SFMedu
Lorenzo Torresani's Structure from Motion Matlab code https://github.com/scivision/em-sfm
https://github.com/vrabaud/sfm_toolbox
OpenMVG C++ library https://github.com/openMVG/openMVG
collection of computer vision methods for solving geometric vision problems https://github.com/laurentkneip/opengv
Multiview Geometry Library in C++11
Quaternion Based Camera Pose Estimation From Matched Feature Points
https://sites.google.com/view/kavehfathian/code and its paper: https://arxiv.org/pdf/1704.02672.pdf
Mapping
Direct Sparse Mapping
https://github.com/jzubizarreta/dsm
Volumetric 3D Mapping in Real-Time on a CPU
https://github.com/tum-vision/fastfusion
Others
SLAM with IMU on Android
https://github.com/knagara/SLAMwithCameraIMUforAndroid
iOS (iPhone 7 Plus)
https://github.com/HKUST-Aerial-Robotics/VINS-Mobile
MATLAB
With good documentation on how to read images and depth from the Kinect. https://github.com/AutoSLAM/SLAM
Datasets and benchmarking
Curated List of datasets:
https://github.com/youngguncho/awesome-slam-datasets
iGibson
A simulation environment providing fast visual rendering and physics simulation based on Bullet. https://svl.stanford.edu/igibson/
EuRoC MAV Dataset
http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets
Visual-inertial datasets collected on board a micro aerial vehicle (MAV). The datasets contain stereo images, synchronized IMU measurements, and accurate motion and structure ground truth.
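A minimal loader sketch, assuming the ASL directory layout the dataset ships in (mav0/cam0/data/<timestamp>.png images and mav0/imu0/data.csv); not an official reader:

```python
# Minimal loader sketch for the EuRoC ASL directory layout (an assumption about
# the on-disk format). Sequence name and paths are placeholders.
from pathlib import Path
import numpy as np
import cv2

seq = Path("MH_01_easy/mav0")   # hypothetical extracted sequence

# IMU: timestamp [ns], w_x, w_y, w_z [rad/s], a_x, a_y, a_z [m/s^2]
imu = np.loadtxt(seq / "imu0" / "data.csv", delimiter=",", comments="#")
print("IMU samples:", imu.shape)

# Left camera frames, named by their timestamp in nanoseconds
frames = sorted((seq / "cam0" / "data").glob("*.png"))
for path in frames[:5]:
    t_ns = int(path.stem)
    img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    print(t_ns, img.shape)
```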
TUM VI Benchmark for Evaluating Visual-Inertial Odometry
https://vision.in.tum.de/data/datasets/visual-inertial-dataset different scenes for evaluating VI odometry
Authentic Dataset for Visual-Inertial Odometry
https://github.com/AaltoVision/ADVIO
PennCOSYVIO: a challenging visual-inertial odometry benchmark
https://daniilidis-group.github.io/penncosyvio/ from the University of Pennsylvania, published at ICRA 2017
ICL-NUIM
https://www.doc.ic.ac.uk/~ahanda/VaFRIC/iclnuim.html benchmarking RGB-D, Visual Odometry and SLAM algorithms
Benchmarking Pose Estimation Algorithms
https://sites.google.com/view/kavehfathian/code/benchmarking-pose-estimation-algorithms
Toolbox for quantitative trajectory evaluation of VO/VIO
https://github.com/uzh-rpg/rpg_trajectory_evaluation
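The standard metric such toolboxes report is absolute trajectory error after aligning the estimate to ground truth. A numpy sketch of Umeyama (Sim(3)) alignment plus ATE RMSE, assuming already time-associated (N, 3) position arrays; it is an illustration of the metric, not the toolbox's code:

```python
# Sketch of the usual VO/VIO accuracy metric: Umeyama alignment of the estimated
# trajectory to ground truth, then ATE RMSE over positions. Trajectories are
# assumed to be time-associated (N, 3) arrays already.
import numpy as np

def align_umeyama(gt, est):
    """Return s, R, t minimizing || gt - (s * R @ est + t) ||."""
    mu_gt, mu_est = gt.mean(0), est.mean(0)
    gt_c, est_c = gt - mu_gt, est - mu_est
    cov = gt_c.T @ est_c / len(gt)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                       # keep R a proper rotation
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / (est_c ** 2).sum(1).mean()
    t = mu_gt - s * R @ mu_est
    return s, R, t

def ate_rmse(gt, est):
    s, R, t = align_umeyama(gt, est)
    err = gt - (s * (R @ est.T).T + t)
    return np.sqrt((err ** 2).sum(1).mean())

# Toy check: a rotated, scaled, shifted copy of the ground truth aligns exactly
gt = np.cumsum(np.random.randn(500, 3), axis=0)
est = 0.5 * gt @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + np.array([1.0, 2.0, 3.0])
print(ate_rmse(gt, est))  # ~0
```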
Photorealistic Simulator for VIO testing/benchmarking
https://github.com/mit-fast/FlightGoggles
Machine Learning / Deep Learning based
Learning monocular visual odometry with dense 3D mapping from dense 3D flow
DeepVO: A Deep Learning approach for Monocular Visual Odometry
Survey papers and articles
Survey with year, sensors used, and best practices
Imperial College ICCV 2015 workshop
Deep Auxiliary Learning for Visual Localization and Odometry
Groups to follow
Robotics and Perception Group
TUM VISION
Handheld AR
http://studierstube.icg.tugraz.at/handheld_ar/cityofsights.php
Another curated list
For SFM, 3D reconstruction, and V-SLAM: https://github.com/openMVG/awesome_3DReconstruction_list