<div align="center">
  <img src="resources/mmtrack-logo.png" width="600"/>

  <b><font size="5">OpenMMLab website</font></b>
  <sup>
    <a href="https://openmmlab.com">
      <i><font size="4">HOT</font></i>
    </a>
  </sup>
  <b><font size="5">OpenMMLab platform</font></b>
  <sup>
    <a href="https://platform.openmmlab.com">
      <i><font size="4">TRY IT OUT</font></i>
    </a>
  </sup>

  📘Documentation | 🛠️Installation | 👀Model Zoo | 🆕Update News | 🤔Reporting Issues
</div>

<div align="center">
  English | 简体中文
</div>

Introduction
MMTracking is an open-source video perception toolbox based on PyTorch. It is a part of the OpenMMLab project.

The master branch works with PyTorch 1.5+.
<div align="center">
  <img src="https://user-images.githubusercontent.com/24663779/103343312-c724f480-4ac6-11eb-9c22-b56f1902584e.gif" width="800"/>
</div>

Major features
- The First Unified Video Perception Platform

  We are the first open-source toolbox that unifies versatile video perception tasks, including video object detection, multiple object tracking, single object tracking, and video instance segmentation.

- Modular Design

  We decompose the video perception framework into different components, so one can easily construct a customized method by combining different modules.

- Simple, Fast and Strong

  Simple: MMTracking interacts with other OpenMMLab projects. It is built upon MMDetection, so one can capitalize on any detector simply by modifying the configs.

  Fast: All operations run on GPUs. The training and inference speeds are faster than or comparable to other implementations.

  Strong: We reproduce state-of-the-art models, and some of them even outperform the official implementations.
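The config-driven, modular design described above can be illustrated with a small sketch. The field names below are only modeled on OpenMMLab-style configs and are not guaranteed to match the keys in MMTracking's shipped configs; consult the real config files for the authoritative structure.

```python
# Hypothetical MMTracking-style config sketch: a tracker is assembled from
# nested dict components, so swapping the detector means editing one sub-dict.
# (Field names are illustrative, not MMTracking's exact keys.)
base_model = dict(
    type='DeepSORT',                      # tracking method
    detector=dict(type='FasterRCNN',      # detector reused from MMDetection
                  backbone=dict(type='ResNet', depth=50)),
    motion=dict(type='KalmanFilter'),
    tracker=dict(type='SortTracker', match_iou_thr=0.5),
)

# "Capitalize on any detector by modifying the configs": replace only the
# detector component; every other module is left untouched.
custom_model = dict(base_model, detector=dict(type='YOLOX'))
```

This is the essence of the modular design: components are declared, not hard-wired, so recombining them requires no code changes.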
What's New
We release MMTracking 1.0.0rc0, the first version of MMTracking 1.x.
Built upon the new training engine, MMTracking 1.x unifies the interfaces of datasets, models, evaluation, and visualization.
We also support more methods in MMTracking 1.x, such as StrongSORT for MOT, Mask2Former for VIS, and PrDiMP for SOT.

Please refer to the dev-1.x branch for the usage of MMTracking 1.x.
Installation
Please refer to install.md for installation instructions.
Getting Started
Please see dataset.md and quick_run.md for the basic usage of MMTracking.
A Colab tutorial is provided. You may preview the notebook here or directly run it on Colab.
There are also usage tutorials, such as:

- learning about configs
- detailed descriptions of the VID, MOT, and SOT configs
- customizing datasets and data pipelines
- customizing VID, MOT, and SOT models
- customizing runtime settings
- useful tools
Benchmark and model zoo
Results and models are available in the model zoo.
Video Object Detection
Supported Methods
- DFF (CVPR 2017)
- FGFA (ICCV 2017)
- SELSA (ICCV 2019)
- Temporal RoI Align (AAAI 2021)
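Single-frame detectors degrade on blurred or occluded frames; methods such as FGFA and SELSA counter this by aggregating features from neighboring frames. The following is a dependency-free toy sketch of similarity-weighted aggregation, not the actual implementations, which operate on dense CNN feature maps:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def aggregate(ref, neighbors):
    """Weight each frame's feature by its softmax-normalized similarity to
    the reference frame, then sum (the FGFA-style aggregation idea)."""
    sims = [cosine(ref, f) for f in neighbors]
    exps = [math.exp(s) for s in sims]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(ref)
    return [sum(w * f[i] for w, f in zip(weights, neighbors))
            for i in range(dim)]

# Two frames resemble the reference, one (e.g. a blurred frame) does not;
# the aggregated feature is dominated by the similar frames.
frames = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
agg = aggregate(frames[0], frames)
```

Dissimilar frames receive small weights, so a corrupted frame contributes little to the aggregated feature.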
Supported Datasets
Single Object Tracking
Supported Methods
- SiameseRPN++ (CVPR 2019)
- STARK (ICCV 2021)
- MixFormer (CVPR 2022)
- PrDiMP (CVPR 2020) (WIP)
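Siamese trackers such as SiameseRPN++ localize the target by cross-correlating template features with search-region features and taking the peak of the response map. A dependency-free toy sketch of that correlation step (real trackers correlate deep feature maps, not raw values):

```python
def correlate(search, template):
    """Slide the template over the search map and record the raw correlation
    score at each position (toy stand-in for the cross-correlation used by
    Siamese trackers)."""
    sh, sw = len(search), len(search[0])
    th, tw = len(template), len(template[0])
    out = []
    for y in range(sh - th + 1):
        row = []
        for x in range(sw - tw + 1):
            score = sum(search[y + i][x + j] * template[i][j]
                        for i in range(th) for j in range(tw))
            row.append(score)
        out.append(row)
    return out

search = [
    [0, 0, 0, 0],
    [0, 1, 2, 0],
    [0, 3, 4, 0],
    [0, 0, 0, 0],
]
template = [[1, 2], [3, 4]]
response = correlate(search, template)
# The peak of the response map indicates the most likely target position.
peak = max((v, (y, x)) for y, row in enumerate(response)
           for x, v in enumerate(row))
```

Here the template exactly matches the patch at offset (1, 1), so the response peaks there.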
Supported Datasets
Multi-Object Tracking
Supported Methods
- SORT/DeepSORT (ICIP 2016/2017)
- Tracktor (ICCV 2019)
- QDTrack (CVPR 2021)
- ByteTrack (ECCV 2022)
- OC-SORT (arXiv 2022)
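Tracking-by-detection methods such as SORT associate per-frame detections with existing tracks by box overlap. Below is a minimal sketch of IoU-based association; note that SORT proper solves the assignment with the Hungarian algorithm and a Kalman-filter motion model, while the greedy matching here is a deliberate simplification to stay dependency-free:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_thr=0.3):
    """Greedily match detections to tracks in decreasing IoU order,
    discarding pairs below the threshold."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_thr:
            break
        if ti in used_t or di in used_d:
            continue
        matches.append((ti, di))
        used_t.add(ti)
        used_d.add(di)
    return matches

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(21, 19, 31, 29), (1, 1, 11, 11)]
matches = associate(tracks, dets)
```

Unmatched detections would start new tracks and unmatched tracks would age out; appearance embeddings (DeepSORT) or low-score detection recovery (ByteTrack) refine this same association step.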
Supported Datasets
Video Instance Segmentation
Supported Methods
- MaskTrack R-CNN (ICCV 2019)
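VIS extends multi-object tracking from boxes to pixel-level masks, so cross-frame association can use mask overlap. A tiny illustrative sketch of mask IoU on flattened binary masks (MaskTrack R-CNN itself learns a dedicated association head; this is only the overlap cue):

```python
def mask_iou(m1, m2):
    """IoU between two binary masks given as flat lists of 0/1 values —
    a small building block for associating instance masks across frames."""
    inter = sum(1 for a, b in zip(m1, m2) if a and b)
    union = sum(1 for a, b in zip(m1, m2) if a or b)
    return inter / union if union else 0.0

# Same instance in consecutive frames, with a small shift in its mask.
frame_t  = [1, 1, 0, 0, 1, 1, 0, 0]
frame_t1 = [0, 1, 0, 0, 1, 1, 0, 1]
score = mask_iou(frame_t, frame_t1)
```

A high mask IoU across frames is evidence that two masks belong to the same instance.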
Supported Datasets
Contributing
We appreciate all contributions to improve MMTracking. Please refer to CONTRIBUTING.md for the contributing guidelines and to this discussion for the development roadmap.
Acknowledgement
MMTracking is an open-source project that welcomes any contribution and feedback. We hope that the toolbox and benchmark can serve the growing research community by providing a flexible and standardized toolkit to reimplement existing methods and develop new video perception methods.
Citation
If you find this project useful in your research, please consider citing:

```bibtex
@misc{mmtrack2020,
    title={{MMTracking: OpenMMLab} video perception toolbox and benchmark},
    author={MMTracking Contributors},
    howpublished={\url{https://github.com/open-mmlab/mmtracking}},
    year={2020}
}
```
License
This project is released under the Apache 2.0 license.
Projects in OpenMMLab
- MMCV: OpenMMLab foundational library for computer vision.
- MIM: MIM installs OpenMMLab packages.
- MMClassification: OpenMMLab image classification toolbox and benchmark.
- MMDetection: OpenMMLab detection toolbox and benchmark.
- MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
- MMYOLO: OpenMMLab YOLO series toolbox and benchmark.
- MMRotate: OpenMMLab rotated object detection toolbox and benchmark.
- MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
- MMOCR: OpenMMLab text detection, recognition and understanding toolbox.
- MMPose: OpenMMLab pose estimation toolbox and benchmark.
- MMHuman3D: OpenMMLab 3D human parametric model toolbox and benchmark.
- MMSelfSup: OpenMMLab self-supervised learning toolbox and benchmark.
- MMRazor: OpenMMLab model compression toolbox and benchmark.
- MMFewShot: OpenMMLab few-shot learning toolbox and benchmark.
- MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
- MMTracking: OpenMMLab video perception toolbox and benchmark.
- MMFlow: OpenMMLab optical flow toolbox and benchmark.
- MMEditing: OpenMMLab image and video editing toolbox.
- MMGeneration: OpenMMLab generative model toolbox and benchmark.
- MMDeploy: OpenMMLab deep learning model deployment toolset.