<div align="center"> <img src="resources/mmtrack-logo.png" width="600"/> </div>

Documentation: https://mmtracking.readthedocs.io/

Introduction

MMTracking is an open source video perception toolbox based on PyTorch. It is a part of the OpenMMLab project.

The master branch works with PyTorch 1.3 to 1.7.

<div align="left"> <img src="https://user-images.githubusercontent.com/24663779/103343312-c724f480-4ac6-11eb-9c22-b56f1902584e.gif" width="800"/> </div>

Major features

License

This project is released under the Apache 2.0 license.

Changelog

v0.5.0 was released on 04/01/2021. Please refer to changelog.md for details and release history.

Benchmark and model zoo

Results and models are available in the model zoo.

Supported methods of video object detection:

Supported methods of multi-object tracking:

Supported methods of single-object tracking:

Installation

Please refer to install.md for installation instructions.
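
After following install.md, a quick sanity check of the environment can help catch version mismatches early. The sketch below is only an illustration: it assumes the toolbox is importable as the `mmtrack` package and that PyTorch and MMCV are already installed, and it simply prints versions and CUDA availability.

```python
# Hedged sanity check: verifies that the core dependencies import cleanly.
# Assumes the toolbox was installed as the `mmtrack` package per install.md.
import torch
import mmcv
import mmtrack

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("MMCV:", mmcv.__version__)
print("MMTracking:", mmtrack.__version__)
```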

Get Started

Please see dataset.md and quick_run.md for the basic usage of MMTracking. We also provide usage tutorials.
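
For a flavour of the high-level API, the sketch below runs multi-object tracking on a video frame by frame. It is a minimal illustration rather than the documented workflow: the function names follow the `mmtrack.apis` module, but the config and checkpoint paths are placeholders, and quick_run.md remains the authoritative reference.

```python
# Minimal MOT inference sketch (config/checkpoint paths are placeholders).
# See quick_run.md for the documented usage; result keys may differ by version.
import mmcv
from mmtrack.apis import init_model, inference_mot

config_file = 'configs/mot/your_chosen_config.py'  # placeholder: pick a config from the repo
checkpoint_file = None  # placeholder: or a path/URL to a checkpoint from the model zoo

model = init_model(config_file, checkpoint_file, device='cuda:0')

video = mmcv.VideoReader('demo.mp4')  # any video readable by mmcv
for frame_id, img in enumerate(video):
    result = inference_mot(model, img, frame_id=frame_id)
    # `result` holds the detection and track results for this frame.
```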

Contributing

We appreciate all contributions to improve MMTracking. Please refer to CONTRIBUTING.md for the contributing guidelines.

Acknowledgement

MMTracking is an open source project that welcomes any contribution and feedback. We hope the toolbox and benchmark serve the growing research community by providing a flexible and standardized toolkit to reimplement existing methods and to develop new video perception methods.

Citation

If you find this repo useful for your research, please consider citing the following paper:

@inproceedings{cui2021tf,
  title={TF-Blender: Temporal Feature Blender for Video Object Detection},
  author={Cui, Yiming and Yan, Liqi and Cao, Zhiwen and Liu, Dongfang},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}