<div align="center"> <img src="resources/mmdeploy-logo.png" width="450"/> <div>&nbsp;</div> <div align="center"> <b><font size="5">OpenMMLab website</font></b> <sup> <a href="https://openmmlab.com"> <i><font size="4">HOT</font></i> </a> </sup> &nbsp;&nbsp;&nbsp;&nbsp; <b><font size="5">OpenMMLab platform</font></b> <sup> <a href="https://platform.openmmlab.com"> <i><font size="4">TRY IT OUT</font></i> </a> </sup> </div> <div>&nbsp;</div>


English | 简体中文

</div> <div align="center"> <a href="https://openmmlab.medium.com/" style="text-decoration:none;"> <img src="https://user-images.githubusercontent.com/25839884/218352562-cdded397-b0f3-4ca1-b8dd-a60df8dca75b.png" width="3%" alt="" /></a> <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" /> <a href="https://discord.gg/raweFPmdzG" style="text-decoration:none;"> <img src="https://user-images.githubusercontent.com/25839884/218347213-c080267f-cbb6-443e-8532-8e1ed9a58ea9.png" width="3%" alt="" /></a> <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" /> <a href="https://twitter.com/OpenMMLab" style="text-decoration:none;"> <img src="https://user-images.githubusercontent.com/25839884/218346637-d30c8a0f-3eba-4699-8131-512fb06d46db.png" width="3%" alt="" /></a> <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" /> <a href="https://www.youtube.com/openmmlab" style="text-decoration:none;"> <img src="https://user-images.githubusercontent.com/25839884/218346691-ceb2116a-465a-40af-8424-9f30d2348ca9.png" width="3%" alt="" /></a> </div>

Highlights

MMDeploy 1.x has been released and is adapted to the upstream OpenMMLab 2.0 codebases; please align package versions when using it. The default branch has been switched from master to main. MMDeploy 0.x (master) will be deprecated, and new features will only be added to MMDeploy 1.x (main) in the future.

| mmdeploy | mmengine |   mmcv   |  mmdet   | others |
| :------: | :------: | :------: | :------: | :----: |
|  0.x.y   |    -     | \<=1.x.y | \<=2.x.y | 0.x.y  |
|  1.x.y   |  0.x.y   |  2.x.y   |  3.x.y   | 1.x.y  |
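
A quick way to check that an installation matches the MMDeploy 1.x row is to compare the major versions of the installed packages. The snippet below is a minimal sketch, not an official MMDeploy tool; the expected major versions are taken from the table above.

```python
# Minimal sketch: verify installed OpenMMLab packages against the
# MMDeploy 1.x row of the compatibility table above.
from importlib.metadata import PackageNotFoundError, version

EXPECTED_MAJOR = {"mmdeploy": "1", "mmengine": "0", "mmcv": "2", "mmdet": "3"}

for pkg, major in EXPECTED_MAJOR.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    status = "OK" if installed.split(".")[0] == major else f"expected {major}.x.y"
    print(f"{pkg}: {installed} ({status})")
```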

deploee offers over 2,300 AI models in ONNX, NCNN, TRT, and OpenVINO formats. Featuring a built-in list of real hardware devices, it lets users convert PyTorch models into any target inference format for profiling.

Introduction

MMDeploy is an open-source deep learning model deployment toolset. It is a part of the OpenMMLab project.

<div align="center"> <img src="resources/introduction.png"> </div>

Main features

Fully support OpenMMLab models

The currently supported codebases and models are listed below; more will be included in the future.

Multiple inference backends are available

The supported Device / Platform / Inference Backend matrix is shown below; more combinations will be supported over time.

The benchmark can be found here.

<div style="width: fit-content; margin: auto;"> <table> <tr> <th>Device / <br> Platform</th> <th>Linux</th> <th>Windows</th> <th>macOS</th> <th>Android</th> </tr> <tr> <th>x86_64 <br> CPU</th> <td> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-ort.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ort.yml"></a></sub> <sub>onnxruntime</sub> <br> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-pplnn.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-pplnn.yml"></a></sub> <sub>pplnn</sub> <br> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-ncnn.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ncnn.yml"></a></sub> <sub>ncnn</sub> <br> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-torchscript.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-torchscript.yml"></a></sub> <sub>LibTorch</sub> <br> <sub><img src="https://img.shields.io/badge/build-no%20status-lightgrey"></sub> <sub>OpenVINO</sub> <br> <sub><img src="https://img.shields.io/badge/build-no%20status-lightgrey"></sub> <sub>TVM</sub> <br> </td> <td> <sub><img src="https://img.shields.io/badge/build-no%20status-lightgrey"></sub> <sub>onnxruntime</sub> <br> <sub><img src="https://img.shields.io/badge/build-no%20status-lightgrey"></sub> <sub>OpenVINO</sub> <br> <sub><img src="https://img.shields.io/badge/build-no%20status-lightgrey"></sub> <sub>ncnn</sub> <br> </td> <td align="center"> - </td> <td align="center"> - </td> </tr> <tr> <th>ARM <br> CPU</th> <td> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/build.yml"><img src="https://byob.yarr.is/open-mmlab/mmdeploy/cross_build_aarch64"></a></sub> <sub>ncnn</sub> <br> </td> <td align="center"> - </td> <td align="center"> - </td> <td align="center"> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-ncnn.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ncnn.yml"></a></sub> <sub>ncnn</sub> <br> </td> </tr> <tr> <th>RISC-V</th> <td> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/linux-riscv64-gcc.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/linux-riscv64-gcc.yml"></a></sub> <sub>ncnn</sub> <br> </td> <td align="center"> - </td> <td align="center"> - </td> <td align="center"> - </td> </tr> <tr> <th>NVIDIA <br> GPU</th> <td> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/build.yml"><img src="https://byob.yarr.is/open-mmlab/mmdeploy/build_cuda113_linux"></a></sub> <sub>onnxruntime</sub> <br> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/build.yml"><img src="https://byob.yarr.is/open-mmlab/mmdeploy/build_cuda113_linux"></a></sub> <sub>TensorRT</sub> <br> <sub><img src="https://img.shields.io/badge/build-no%20status-lightgrey"></sub> <sub>LibTorch</sub> <br> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-pplnn.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-pplnn.yml"></a></sub> <sub>pplnn</sub> <br> </td> <td> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/build.yml"><img src="https://byob.yarr.is/open-mmlab/mmdeploy/build_cuda113_windows"></a></sub> <sub>onnxruntime</sub> 
<br> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/build.yml"><img src="https://byob.yarr.is/open-mmlab/mmdeploy/build_cuda113_windows"></a></sub> <sub>TensorRT</sub> <br> </td> <td align="center"> - </td> <td align="center"> - </td> </tr> <tr> <th>NVIDIA <br> Jetson</th> <td> <sub><img src="https://img.shields.io/badge/build-no%20status-lightgrey"></sub> <sub>TensorRT</sub> <br> </td> <td align="center"> - </td> <td align="center"> - </td> <td align="center"> - </td> </tr> <tr> <th>Huawei <br> ascend310</th> <td> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-ascend.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ascend.yml"></a></sub> <sub>CANN</sub> <br> </td> <td align="center"> - </td> <td align="center"> - </td> <td align="center"> - </td> </tr> <tr> <th>Rockchip</th> <td> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-rknn.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-rknn.yml"></a></sub> <sub>RKNN</sub> <br> </td> <td align="center"> - </td> <td align="center"> - </td> <td align="center"> - </td> </tr> <tr> <th>Apple M1</th> <td align="center"> - </td> <td align="center"> - </td> <td> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-coreml.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-coreml.yml"></a></sub> <sub>CoreML</sub> <br> </td> <td align="center"> - </td> </tr> <tr> <th>Adreno <br> GPU</th> <td align="center"> - </td> <td align="center"> - </td> <td align="center"> - </td> <td> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-snpe.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-snpe.yml"></a></sub> <sub>SNPE</sub> <br> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-ncnn.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-ncnn.yml"></a></sub> <sub>ncnn</sub> <br> </td> </tr> <tr> <th>Hexagon <br> DSP</th> <td align="center"> - </td> <td align="center"> - </td> <td align="center"> - </td> <td> <sub><a href="https://github.com/open-mmlab/mmdeploy/actions/workflows/backend-snpe.yml"><img src="https://img.shields.io/github/actions/workflow/status/open-mmlab/mmdeploy/backend-snpe.yml"></a></sub> <sub>SNPE</sub> <br> </td> </tr> </table> </div>
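
In practice, the pairing of codebase and inference backend is expressed in a deploy config. The sketch below mirrors the layout of the configs shipped under `configs/` (the field names `onnx_config`, `codebase_config`, and `backend_config` follow those files; the specific values here are illustrative, not a drop-in replacement for the bundled configs).

```python
# Illustrative deploy config in the style of MMDeploy's bundled configs
# (e.g. the mmdet detection configs); values are examples only.
onnx_config = dict(
    type='onnx',
    opset_version=11,
    input_names=['input'],
    output_names=['dets', 'labels'],
    input_shape=None)
codebase_config = dict(type='mmdet', task='ObjectDetection')
# Switch the backend by changing `type`, e.g. 'tensorrt', 'ncnn', 'openvino'.
backend_config = dict(type='onnxruntime')
```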

Efficient and scalable C/C++ SDK Framework

All kinds of modules in the SDK can be extended, such as Transform for image processing, Net for neural network inference, and Module for postprocessing.
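
In addition to the C/C++ API, the SDK ships language bindings. Assuming the Python binding (distributed as the `mmdeploy_runtime` package) is installed and a detection model has already been converted into `work_dir`, a minimal usage sketch looks like this (the model directory and image path are placeholders):

```python
# Minimal sketch of the SDK's Python binding; the model directory and image
# path are placeholders for artifacts produced by a prior model conversion.
import cv2
from mmdeploy_runtime import Detector

img = cv2.imread('demo.jpg')  # placeholder test image
detector = Detector(model_path='work_dir', device_name='cpu', device_id=0)
bboxes, labels, masks = detector(img)  # run inference

for (x1, y1, x2, y2, score), label in zip(bboxes, labels):
    if score > 0.3:  # simple score filter
        print(f'label={label}, score={score:.2f}, '
              f'box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})')
```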

Documentation

Please read getting_started for the basic usage of MMDeploy. We also provide tutorials about:
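
As a quick taste of the basic workflow described in getting_started, the sketch below exports a detection model to ONNX with the Python API (`mmdeploy.apis.torch2onnx`). The checkpoint, model config, and image paths are placeholders, and the deploy config name follows the files shipped under `configs/mmdet/detection/`.

```python
# Sketch of ONNX export via the Python API; all paths are placeholders and
# must point to real MMDetection/MMDeploy files on your machine.
from mmdeploy.apis import torch2onnx

torch2onnx(
    img='demo.jpg',                                   # sample input image
    work_dir='work_dir',                              # output directory
    save_file='end2end.onnx',                         # exported model name
    deploy_cfg='configs/mmdet/detection/detection_onnxruntime_dynamic.py',
    model_cfg='path/to/mmdet_model_config.py',        # MMDetection config
    model_checkpoint='path/to/checkpoint.pth',
    device='cpu')
```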

Benchmark and Model zoo

You can find the supported models from here and their performance in the benchmark.

Contributing

We appreciate all contributions to MMDeploy. Please refer to CONTRIBUTING.md for the contributing guideline.

Acknowledgement

We would like to sincerely thank the following teams for their contributions to MMDeploy:

Citation

If you find this project useful in your research, please consider citing:

@misc{mmdeploy,
    title={OpenMMLab's Model Deployment Toolbox.},
    author={MMDeploy Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmdeploy}},
    year={2021}
}

License

This project is released under the Apache 2.0 license.

Projects in OpenMMLab