flame_ros

FLaME (Fast Lightweight Mesh Estimation) is a lightweight, CPU-only method for dense online monocular depth estimation. Given a sequence of camera images with known poses, FLaME is able to reconstruct dense 3D meshes of the environment by posing the depth estimation problem as a variational optimization over a Delaunay graph that can be solved at framerate, even on computationally constrained platforms.
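As a rough, schematic sketch (not the exact cost function from the paper), the per-frame optimization can be viewed as minimizing a smoothness term over the edges of the Delaunay graph plus a data term at its vertices, where xi_i denotes the inverse depth at vertex i and lambda weights the data term (cf. the regularization/nltgv2/data_factor parameter discussed under Performance Tips):

E(\xi) = \sum_{(i,j) \in \mathcal{E}} \phi_{\mathrm{reg}}(\xi_i, \xi_j) + \lambda \sum_{i \in \mathcal{V}} \phi_{\mathrm{data}}(\xi_i)

Here \phi_reg and \phi_data are placeholders for the regularizer (NLTGV2 in flame) and the per-vertex measurement cost; consult the publication for the precise formulation.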

The flame_ros repository contains the ROS bindings, visualization code, and offline frontends for the algorithm. The core library can be found in the flame repository (https://github.com/robustrobotics/flame).

<p align="center"> <a href="https://www.youtube.com/watch?v=vB_F-Sj0AX0"> <img src="https://img.youtube.com/vi/vB_F-Sj0AX0/0.jpg" alt="FLaME"> </a> </p>

Related Publications:

Author

Quickstart

Build the provided Docker image and run an example dataset (requires nvidia-docker for rviz):

# Build the image.
cd flame_ros
docker build --rm -t flame -f scripts/Dockerfile .

# Run an example dataset.
./scripts/flame_docker_example.sh

You may need to run xhost +local:root to allow rviz inside the container to connect to your host's X server.

Dependencies

Installation

NOTE: These instructions assume you are running ROS Kinetic on Ubuntu 16.04 and are interested in installing both flame and flame_ros. See the installation instructions for flame if you only wish to install flame.

  1. Install apt dependencies:
sudo apt-get install libboost-all-dev libpcl-dev python-catkin-tools
  2. Create a Catkin workspace using catkin_tools:
# Source ROS.
source /opt/ros/kinetic/setup.sh

# Create workspace source folder.
mkdir -p flame_ws/src

# Checkout flame and flame_ros into workspace.
cd flame_ws/src
git clone https://github.com/robustrobotics/flame.git
git clone https://github.com/robustrobotics/flame_ros.git

# Initialize workspace.
cd ..
catkin init

# Install ROS dependencies using rosdep.
rosdep install -iy --from-paths ./src
  3. Install Eigen 3.2 and Sophus using the scripts provided with flame:
# Create a dependencies folder (run these from the flame_ws root created above).
mkdir -p dependencies/src

# Checkout Eigen and Sophus into ./dependencies/src and install into ./dependencies.
./src/flame/scripts/eigen.sh ./dependencies/src ./dependencies
./src/flame/scripts/sophus.sh ./dependencies/src ./dependencies

# Copy and source environment variable script:
cp ./src/flame/scripts/env.sh ./dependencies/
source ./dependencies/env.sh
  4. Build workspace:
# Build!
catkin build

# Source workspace.
source ./devel/setup.sh

Offline Processing

Two types of offline nodes are provided, one to process video from the EuRoC MAV Dataset and one to process video from the TUM RGB-D SLAM Benchmark.

EuRoC Data

First, download and extract one of the ASL datasets here (the Vicon Room datasets should work well).

Next, update the parameter file in flame_ros/cfg/flame_offline_asl.yaml to point to where you extracted the data:

pose_path: <path_to_dataset>/mav0/state_groundtruth_estimate0
rgb_path: <path_to_dataset>/mav0/cam0

Finally, launch the offline node to process the data:

roslaunch flame_ros flame_offline_asl.launch

The mesh should be published on the /flame/mesh topic. To visualize this topic in rviz, consult the Visualization section.

TUM Data

First, download and extract one of the datasets here (fr3/structure_texture_far or fr3/long_office_household should work well). Use the associate.py script here to associate the pose (groundtruth.txt) and RGB (rgb.txt) files (you can associate the depthmaps as well).
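A typical invocation looks like the following (assuming the standard command-line interface of the TUM associate.py script; adjust paths and options for your setup):

# Write the associated pose/RGB file consumed by the offline node below.
python associate.py groundtruth.txt rgb.txt > groundtruth_rgb.txt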

A ROS-compliant camera calibration YAML file will be needed. You can use the one provided in flame_ros/cfg/kinect.yaml, which has the default parameters for the Microsoft Kinect used to collect the TUM datasets.

Next, update the parameter file in flame_ros/cfg/flame_offline_tum.yaml to point to where you extracted the data:

input_file: <path_to_dataset>/groundtruth_rgb.txt
calib_file: <path_to_flame_ros>/cfg/kinect.yaml

Finally, launch the offline node to process the data:

roslaunch flame_ros flame_offline_tum.launch

The mesh should be published on the /flame/mesh topic. To visualize this topic in rviz, consult the Visualization section.

Online Processing

The online nodelet can be launched using flame_nodelet.launch:

roslaunch flame_ros flame_nodelet.launch image:=/image

where /image is your live rectified/undistorted image stream. The frame_id of this topic must correspond to a Right-Down-Forward frame attached to the camera. The tf tree must be complete such that the pose of the camera in the world frame can be resolved by tf.
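It can be useful to confirm that tf can resolve the camera pose before starting the nodelet. The frame names below are placeholders; substitute your own world and camera frame ids:

# Print the transform from the world frame to the camera frame (placeholder names).
rosrun tf tf_echo world camera

# Alternatively, dump the current tf tree to frames.pdf for inspection.
rosrun tf view_frames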

The mesh should be published on the /flame/mesh topic. To visualize this topic in rviz, consult the Visualization section.

The flame_nodelet.launch launch file loads the parameters listed in flame_ros/cfg/flame_nodelet.yaml. You may need to update the input/camera_frame_id param for your data; see the Parameters section for more information.
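As an illustration only, the relevant entry in flame_nodelet.yaml nests under input; the value below is a placeholder and must be replaced with the frame_id of your camera's Right-Down-Forward frame:

input:
  camera_frame_id: camera_rdf  # placeholder; must match your camera's RDF frame_id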

Visualization

You can use the provided configuration file (flame_ros/cfg/flame.rviz) to visualize the output data in rviz. This approach uses a custom rviz plugin to render the /flame/mesh messages. flame_ros can also publish the depth data in other formats (e.g. as a sensor_msgs/PointCloud2 or a sensor_msgs/Image), which can be visualized by enabling the corresponding plugins.
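For example, rviz can be started directly with that configuration (assuming the workspace has been sourced so that rospack can locate flame_ros):

# Launch rviz with the provided FLaME display configuration.
rosrun rviz rviz -d $(rospack find flame_ros)/cfg/flame.rviz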

Parameters

There are many parameters that control processing, but only a handful are particularly important:

Performance Tips

For best results use a high framerate (>= 30 Hz) camera with VGA-sized images. Higher resolution images will require more accurate poses. The feature detection window size (features/detection/win_size) and the data scaling term (regularization/nltgv2/data_factor) are the primary knobs for tuning performance. The default parameters should work well in most cases, but you may need to tune for your specific data.
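As an illustration of where those knobs live, the keys nest as shown below (the values are placeholders, not recommended settings; start from the defaults shipped in the cfg/ yaml files):

features:
  detection:
    win_size: 16        # placeholder value
regularization:
  nltgv2:
    data_factor: 0.10   # placeholder value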

By default, flame_ros will publish several debug images. While helpful to observe during operation, they will slow down the pipeline. Disable them if you are trying to increase throughput.

The usual tips for monocular SLAM/depth estimation systems also apply: