Motion R-CNN

This repository contains the prototype TensorFlow implementation of my bachelor thesis, Motion R-CNN: Instance-level 3D Motion Estimation with Region-based CNNs.

In addition to the functionality provided by the TensorFlow Object Detection API (at the time of writing), the code supports:

Note that the code currently supports training only on the Virtual KITTI dataset, but it is easy to adapt to other datasets. Motion prediction and frame-pair input are fully optional, so the code can also be used as a plain Mask R-CNN implementation with single-image input. Support for Cityscapes is implemented, but because the record interface changed, using records created with create_citiscapes_tf_record.py may require adapting the data_decoder or the record-writing code.
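As a sketch of what such an adaptation could look like (the feature key names below are assumptions for illustration, not the repository's actual record interface), a small remapping shim applied to each parsed example can bridge an old record layout and a newer decoder:

```python
# Hypothetical key remapping (all key names are assumed, not taken from the
# repository): rename outdated feature keys so that records written with an
# older create_citiscapes_tf_record.py can still feed the current data_decoder.
OLD_TO_NEW_KEYS = {
    # assumed old key          ->  assumed new key
    "image/segmentation/masks": "image/object/masks",
    "image/segmentation/labels": "image/object/class/label",
}

def remap_example_features(features):
    """Return a copy of `features` with outdated keys renamed.

    Keys not listed in OLD_TO_NEW_KEYS pass through unchanged.
    """
    return {OLD_TO_NEW_KEYS.get(key, key): value
            for key, value in features.items()}
```

Whether a shim like this suffices depends on how the record writing diverged; regenerating the records with the current writer is the alternative.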

License

Motion R-CNN is released under the MIT License (refer to the LICENSE file for details).

Usage

Requirements

Setup

Note that <data_parent_dir> is the directory containing the vkitti directory.
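A minimal sketch of checking this layout before training (the helper below is hypothetical, not part of the repository):

```python
# Hypothetical layout check: <data_parent_dir> is expected to contain the
# extracted `vkitti` directory with the Virtual KITTI data.
import os

def check_vkitti_layout(data_parent_dir):
    """Return True if `data_parent_dir` contains a `vkitti` subdirectory."""
    return os.path.isdir(os.path.join(data_parent_dir, "vkitti"))
```

Running a check like this before launching a long training job catches a mis-set data path early.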

Training & evaluating

Use

to train and evaluate a model with camera and instance motion prediction. You can adapt the configurations found in data/configs/. For a description of the configuration parameters, see object_detection/protos.
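The configurations use the Object Detection API's text-protobuf pipeline format. A minimal fragment might look like the following; all values and the input path are illustrative placeholders, and the motion-specific extensions added by this repository are described in object_detection/protos:

```
# Illustrative values only; see object_detection/protos for all parameters.
model {
  faster_rcnn {
    num_classes: 3
  }
}
train_config {
  batch_size: 1
  num_steps: 200000
}
train_input_reader {
  tf_record_input_reader {
    input_path: "<data_parent_dir>/records/vkitti_train.record"
  }
}
```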

Navigating the code

The following files were added or modified relative to the original Object Detection API code:

Additionally, some proto parameters and builders were modified, and extensions were made to eval_util.py, eval.py, evaluator.py, train.py, and trainer.py.

The following tests were added or modified:

Acknowledgments

This repository is based on the TensorFlow Object Detection API.