EdgeAI-MMDetection3D

This repository is an extension of the popular mmdetection3d open source repository for 3D object detection. While mmdetection3d focuses on a wide variety of models, typically at high complexity, we focus on models that are optimized for speed and accuracy so that they run efficiently on embedded devices. For this purpose, we have added a set of embedded-friendly model configurations and scripts.

This repository also supports Quantization Aware Training (QAT).

<hr>

Notes

Environment

We have tested this with Ubuntu 22.04 and the pyenv Python environment manager. The setup instructions are given below.

Make sure that you are using the bash shell. If your current shell is not bash, change it to bash. Verify the current shell by typing:

echo ${SHELL}

Install system packages

sudo apt update
sudo apt install build-essential curl libbz2-dev libffi-dev liblzma-dev libncursesw5-dev libreadline-dev libsqlite3-dev libssl-dev libxml2-dev libxmlsec1-dev llvm make tk-dev wget xz-utils zlib1g-dev

Install pyenv using the following command.

curl -L https://github.com/pyenv/pyenv-installer/raw/master/bin/pyenv-installer | bash

echo '# pyenv settings ' >> ${HOME}/.bashrc
echo 'command -v pyenv >/dev/null || export PATH="${HOME}/.pyenv/bin:$PATH"' >> ${HOME}/.bashrc
echo 'eval "$(pyenv init -)"' >> ${HOME}/.bashrc
echo 'eval "$(pyenv virtualenv-init -)"' >> ${HOME}/.bashrc
echo '' >> ${HOME}/.bashrc

exec ${SHELL}

From SDK/TIDL version 9.0 onwards, Python 3.10 is required. Create and activate a Python 3.10 environment before following the rest of the instructions.

pyenv install 3.10
pyenv virtualenv 3.10 mmdet3d
pyenv activate mmdet3d
pip install --upgrade pip setuptools

Note: Prior to SDK/TIDL version 9.0, the required Python version was 3.6.

Activation of the Python environment: this activation step needs to be done every time a new terminal or shell is started. (Alternatively, the activation command can be added to .bashrc so that this becomes the default pyenv environment; a sketch is shown after the command below.)

pyenv activate mmdet3d
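Optionally, to make mmdet3d the default environment for every new shell, a minimal sketch (assuming pyenv and pyenv-virtualenv have been initialized in .bashrc as set up above) is:

# optional: auto-activate the mmdet3d environment in new shells
echo 'pyenv activate mmdet3d' >> ${HOME}/.bashrc
exec ${SHELL}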

Installation Instructions

After cloning this repository, install it as a Python package by running:

./setup.sh

Dataset Preparation

Prepare the dataset as described in the original mmdetection3d dataset preparation documentation.

Note: Currently only the KITTI dataset with the PointPillars network is supported. For the KITTI dataset, optional ground plane data can be downloaded from KITTI Plane data. For preparing the KITTI data with ground plane, please refer to the mmdetection3d dataset preparation documentation (external link) and use the commands below.

Steps for KITTI dataset preparation

# Creating dataset folders
cd <edgeai-mmdetection3d>
mkdir ./data/kitti/ && mkdir ./data/kitti/ImageSets

# Download data split
wget -c https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/test.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/test.txt
wget -c  https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/train.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/train.txt
wget -c  https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/val.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/val.txt
wget -c  https://raw.githubusercontent.com/traveller59/second.pytorch/master/second/data/ImageSets/trainval.txt --no-check-certificate --content-disposition -O ./data/kitti/ImageSets/trainval.txt

# Preparing the dataset
cd <edgeai-mmdetection3d>
python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti --with-plane

Steps for semantic-segmentation-painted KITTI dataset preparation

PointPainting is a simple fusion algorithm for 3D object detection: per-point class scores from a semantic segmentation network are appended ("painted") onto the lidar points before detection. This repository supports data preparation and training for PointPainting. Please refer to https://arxiv.org/abs/1911.10150 for more details. Data preparation for the PointPainting network is described below.
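A minimal, illustrative sketch of the painting step is shown below. This is not the repository's implementation; the array shapes and the lidar-to-image projection matrix are assumptions made for illustration only.

import numpy as np

def paint_points(points, seg_scores, lidar2img):
    # points:     (N, 4) lidar points (x, y, z, intensity)
    # seg_scores: (H, W, C) per-pixel class scores from a semantic segmentation network
    # lidar2img:  (4, 4) assumed projection matrix from lidar to image pixel coordinates
    n = points.shape[0]
    homo = np.hstack([points[:, :3], np.ones((n, 1))])   # homogeneous lidar coordinates
    proj = homo @ lidar2img.T                            # project into the image plane
    depth = np.clip(proj[:, 2], 1e-6, None)              # guard against division by zero
    u = (proj[:, 0] / depth).astype(int)                 # pixel column
    v = (proj[:, 1] / depth).astype(int)                 # pixel row
    h, w, c = seg_scores.shape
    valid = (proj[:, 2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    painted = np.zeros((n, c), dtype=seg_scores.dtype)
    painted[valid] = seg_scores[v[valid], u[valid]]      # gather class scores for in-image points
    return np.hstack([points, painted])                  # (N, 4 + C) painted point cloud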

It is expected that the normal KITTI data preparation from the previous step is already done. One small change is also required in the installed mmseg package. Please change the function "simple_test" in the file ~/anaconda3/envs/<conda env name>/lib/python3.7/site-packages/mmseg/models/segmentors/encoder_decoder.py (adjust the path to your Python environment) as shown below, so that it returns the segmentation output directly after the CNN network and before the argmax. Note that only the returned tensor is changed.

def simple_test(self, img, img_meta, rescale=True):
    """Simple test with single image."""
    seg_logit = self.inference(img, img_meta, rescale)
    seg_pred = seg_logit.argmax(dim=1)
    if torch.onnx.is_in_onnx_export():
        # our inference backend only support 4D output
        seg_pred = seg_pred.unsqueeze(0)
        return seg_pred
    seg_pred = seg_pred.cpu().numpy()
    # unravel batch dim
    seg_pred = list(seg_pred)
    return seg_logit # changed from "return seg_pred"

# Preparing the painted dataset
cd <edgeai-mmdetection3d>/tools/data_converter

python kitti_painting.py

Get Started

Please see Usage for training and testing with this repository.

3D Object Detection Model Zoo

A complexity and accuracy report of several trained models is available in the 3D Detection Model Zoo.

Quantization

This tutorial explains more about quantization and how to do Quantization Aware Training (QAT) of detection models.
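For illustration only, below is a minimal eager-mode QAT sketch using PyTorch's built-in quantization API. This is a generic example with a placeholder model, not the repository's QAT wrapper; see the tutorial above for the actual workflow.

import torch.nn as nn
from torch.ao.quantization import get_default_qat_qconfig, prepare_qat, convert

# placeholder model for illustration; the repository applies QAT to its 3D detection models
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
model.train()
model.qconfig = get_default_qat_qconfig('fbgemm')    # attach fake-quantization configuration
model_prepared = prepare_qat(model)                  # insert fake-quantization observers

# ... fine-tune model_prepared for a few epochs with the usual training loop ...

model_int8 = convert(model_prepared.eval())          # fold observers into a quantized model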

ONNX & Prototxt Export

Export of ONNX model (.onnx) and additional meta information (.prototxt) is supported. The .prototxt contains meta information specified by TIDL for object detectors.

The export of meta information is currently supported for PointPillars detectors.

For more information, please see Usage.

Advanced documentation

Kindly take time to read through the documentation of the original mmdetection3d before attempting to use extensions added to this repository.

Acknowledgement

This is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as the users who give valuable feedback.

We hope that the toolbox and benchmark serve the growing research community by providing a flexible toolkit to train existing detectors and to develop new ones.

License

Please see LICENSE file of this repository.

Model deployment

MMDeploy now supports deployment of some MMDetection3D models. Please refer to model_deployment.md for more details.

Citation

This package/toolbox is an extension of mmdetection3d (https://github.com/open-mmlab/mmdetection3d). If you use this repository or benchmark in your research or work, please cite the following:

@article{EdgeAI-MMDetection3D,
  title   = {{EdgeAI-MMDetection3D}: An Extension To Open MMLab Detection Toolbox and Benchmark},
  author  = {Texas Instruments EdgeAI Development Team, edgeai-devkit@list.ti.com},
  journal = {https://github.com/TexasInstruments/edgeai},
  year    = {2022}
}
@misc{mmdet3d2020,
    title={{MMDetection3D: OpenMMLab} next-generation platform for general {3D} object detection},
    author={MMDetection3D Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmdetection3d}},
    year={2020}
}

References

[1] MMDetection3D: https://github.com/open-mmlab/mmdetection3d
[2] PointPillars: https://arxiv.org/abs/1812.05784
[3] PointPainting: https://arxiv.org/abs/1911.10150

<hr><hr> Original documentation of mmdetection3d <hr>

<div align="center"> <img src="resources/mmdet3d-logo.png" width="600"/> <div>&nbsp;</div> <div align="center"> <b><font size="5">OpenMMLab website</font></b> <sup> <a href="https://openmmlab.com"> <i><font size="4">HOT</font></i> </a> </sup> &nbsp;&nbsp;&nbsp;&nbsp; <b><font size="5">OpenMMLab platform</font></b> <sup> <a href="https://platform.openmmlab.com"> <i><font size="4">TRY IT OUT</font></i> </a> </sup> </div> <div>&nbsp;</div> </div>


News: We released the codebase v1.0.0rc4.

Note: We are going through large refactoring to provide simpler and more unified usage of many modules.

The compatibilities of models are broken due to the unification and simplification of coordinate systems. For now, most models are benchmarked with similar performance, though few models are still being benchmarked. In this version, we update some of the model checkpoints after the refactor of coordinate systems. See more details in the Changelog.

In the nuScenes 3D detection challenge of the 5th AI Driving Olympics at NeurIPS 2020, we obtained the best PKL award, finished as second runner-up with a multi-modality entry, and achieved the best vision-only results.

Code and models for the best vision-only method, FCOS3D, have been released. Please stay tuned for MoCa.

MMDeploy now supports deployment of some MMDetection3D models.

Documentation: https://mmdetection3d.readthedocs.io/

Introduction

English | 简体中文

The master branch works with PyTorch 1.3+.

MMDetection3D is an open source object detection toolbox based on PyTorch, towards the next-generation platform for general 3D detection. It is a part of the OpenMMLab project developed by MMLab.

demo image

Major features

Like MMDetection and MMCV, MMDetection3D can also be used as a library to support different projects on top of it.

License

This project is released under the Apache 2.0 license.

Changelog

v1.0.0rc4 was released on 8/8/2022.

Please refer to changelog.md for details and release history.

Benchmark and model zoo

Results and models are available in the model zoo.

<div align="center"> <b>Components</b> </div> <table align="center"> <tbody> <tr align="center" valign="bottom"> <td> <b>Backbones</b> </td> <td> <b>Heads</b> </td> <td> <b>Features</b> </td> </tr> <tr valign="top"> <td> <ul> <li><a href="configs/pointnet2">PointNet (CVPR'2017)</a></li> <li><a href="configs/pointnet2">PointNet++ (NeurIPS'2017)</a></li> <li><a href="configs/regnet">RegNet (CVPR'2020)</a></li> <li><a href="configs/dgcnn">DGCNN (TOG'2019)</a></li> <li>DLA (CVPR'2018)</li> <li>MinkResNet (CVPR'2019)</li> </ul> </td> <td> <ul> <li><a href="configs/free_anchor">FreeAnchor (NeurIPS'2019)</a></li> </ul> </td> <td> <ul> <li><a href="configs/dynamic_voxelization">Dynamic Voxelization (CoRL'2019)</a></li> </ul> </td> </tr> </tbody> </table> <div align="center"> <b>Architectures</b> </div> <table align="center"> <tbody> <tr align="center" valign="middle"> <td> <b>3D Object Detection</b> </td> <td> <b>Monocular 3D Object Detection</b> </td> <td> <b>Multi-modal 3D Object Detection</b> </td> <td> <b>3D Semantic Segmentation</b> </td> </tr> <tr valign="top"> <td> <li><b>Outdoor</b></li> <ul> <li><a href="configs/second">SECOND (Sensor'2018)</a></li> <li><a href="configs/pointpillars">PointPillars (CVPR'2019)</a></li> <li><a href="configs/ssn">SSN (ECCV'2020)</a></li> <li><a href="configs/3dssd">3DSSD (CVPR'2020)</a></li> <li><a href="configs/sassd">SA-SSD (CVPR'2020)</a></li> <li><a href="configs/point_rcnn">PointRCNN (CVPR'2019)</a></li> <li><a href="configs/parta2">Part-A2 (TPAMI'2020)</a></li> <li><a href="configs/centerpoint">CenterPoint (CVPR'2021)</a></li> </ul> <li><b>Indoor</b></li> <ul> <li><a href="configs/votenet">VoteNet (ICCV'2019)</a></li> <li><a href="configs/h3dnet">H3DNet (ECCV'2020)</a></li> <li><a href="configs/groupfree3d">Group-Free-3D (ICCV'2021)</a></li> <li><a href="configs/fcaf3d">FCAF3D (ECCV'2022)</a></li> </ul> </td> <td> <li><b>Outdoor</b></li> <ul> <li><a href="configs/imvoxelnet">ImVoxelNet (WACV'2022)</a></li> <li><a href="configs/smoke">SMOKE (CVPRW'2020)</a></li> <li><a href="configs/fcos3d">FCOS3D (ICCVW'2021)</a></li> <li><a href="configs/pgd">PGD (CoRL'2021)</a></li> <li><a href="configs/monoflex">MonoFlex (CVPR'2021)</a></li> </ul> </td> <td> <li><b>Outdoor</b></li> <ul> <li><a href="configs/mvxnet">MVXNet (ICRA'2019)</a></li> </ul> <li><b>Indoor</b></li> <ul> <li><a href="configs/imvotenet">ImVoteNet (CVPR'2020)</a></li> </ul> </td> <td> <li><b>Indoor</b></li> <ul> <li><a href="configs/pointnet2">PointNet++ (NeurIPS'2017)</a></li> <li><a href="configs/paconv">PAConv (CVPR'2021)</a></li> <li><a href="configs/dgcnn">DGCNN (TOG'2019)</a></li> </ul> </ul> </td> </tr> </td> </tr> </tbody> </table>
Supported backbones: ResNet, PointNet++, SECOND, DGCNN, RegNetX, DLA, MinkResNet

Supported methods: SECOND, PointPillars, FreeAnchor, VoteNet, H3DNet, 3DSSD, Part-A2, MVXNet, CenterPoint, SSN, ImVoteNet, FCOS3D, PointNet++, Group-Free-3D, ImVoxelNet, PAConv, DGCNN, SMOKE, PGD, MonoFlex, SA-SSD, FCAF3D

Note: All of the 300+ models and methods from 40+ papers in 2D detection supported by MMDetection can be trained or used in this codebase.

Installation

Please refer to getting_started.md for installation.

Get Started

Please see getting_started.md for the basic usage of MMDetection3D. For beginners, we provide guidance for a quick run with an existing dataset and with a customized dataset. There are also tutorials on learning configuration systems, adding new datasets, designing data pipelines, customizing models, customizing runtime settings, and the Waymo dataset.

Please refer to FAQ for frequently asked questions. When updating the version of MMDetection3D, please also check the compatibility doc to be aware of the BC-breaking updates introduced in each version.

Model deployment

MMDeploy now supports deployment of some MMDetection3D models. Please refer to model_deployment.md for more details.

Citation

If you find this project useful in your research, please consider citing:

@misc{mmdet3d2020,
    title={{MMDetection3D: OpenMMLab} next-generation platform for general {3D} object detection},
    author={MMDetection3D Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmdetection3d}},
    year={2020}
}

Contributing

We appreciate all contributions to improve MMDetection3D. Please refer to CONTRIBUTING.md for the contributing guideline.

Acknowledgement

MMDetection3D is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors as well as the users who give valuable feedback. We hope that the toolbox and benchmark serve the growing research community by providing a flexible toolkit to reimplement existing methods and to develop new 3D detectors.

Projects in OpenMMLab