OpenTAD: An Open-Source Temporal Action Detection Toolbox.

<p align="left"> <!-- <a href="https://arxiv.org/abs/xxx.xxx" alt="arXiv"> --> <!-- <img src="https://img.shields.io/badge/arXiv-xxx.xxx-b31b1b.svg?style=flat" /></a> --> <a href="https://github.com/sming256/opentad/blob/main/LICENSE" alt="license"> <img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" /></a> <a href="https://github.com/sming256/OpenTAD/issues" alt="issues"> <img src="https://img.shields.io/github/issues-raw/sming256/OpenTAD?color=%23FF9600" /></a> <a href="https://img.shields.io/github/stars/sming256/opentad" alt="stars"> <img src="https://img.shields.io/github/stars/sming256/opentad" /></a> </p>

OpenTAD is an open-source temporal action detection (TAD) toolbox based on PyTorch.
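For readers new to the task: a temporal action detector takes an untrimmed video and predicts action segments, each with a start time, an end time, a category label, and a confidence score. The snippet below is a purely illustrative sketch of that output format and of the temporal IoU used to match predictions against ground truth; it is not OpenTAD's actual API (see the docs and configs for the real interfaces).

```python
# Illustrative only: the generic shape of TAD predictions and the temporal IoU
# used to score them. Names and values here are made up, not OpenTAD's API.
from typing import Dict, List, Tuple

predictions: List[Dict] = [
    {"segment": (12.3, 18.7), "label": "LongJump", "score": 0.91},
    {"segment": (40.0, 44.5), "label": "Shotput", "score": 0.62},
]

def temporal_iou(a: Tuple[float, float], b: Tuple[float, float]) -> float:
    """Intersection-over-union of two 1D segments (start, end), in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

print(temporal_iou(predictions[0]["segment"], (13.0, 19.0)))  # ~0.85
```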

🥳 What's New

📖 Major Features

🌟 Model Zoo

<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td><b>One Stage</b></td>
      <td><b>Two Stage</b></td>
      <td><b>DETR</b></td>
      <td><b>End-to-End Training</b></td>
    </tr>
    <tr valign="top">
      <td>
        <ul>
          <li><a href="configs/actionformer">ActionFormer (ECCV'22)</a></li>
          <li><a href="configs/tridet">TriDet (CVPR'23)</a></li>
          <li><a href="configs/temporalmaxer">TemporalMaxer (arXiv'23)</a></li>
          <li><a href="configs/videomambasuite">VideoMambaSuite (arXiv'24)</a></li>
          <li><a href="configs/dyfadet">DyFADet (ECCV'24)</a></li>
          <li><a href="configs/causaltad">CausalTAD (arXiv'24)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/bmn">BMN (ICCV'19)</a></li>
          <li><a href="configs/gtad">GTAD (CVPR'20)</a></li>
          <li><a href="configs/tsi">TSI (ACCV'20)</a></li>
          <li><a href="configs/vsgn">VSGN (ICCV'21)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/tadtr">TadTR (TIP'22)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/afsd">AFSD (CVPR'21)</a></li>
          <li><a href="configs/tadtr">E2E-TAD (CVPR'22)</a></li>
          <li><a href="configs/etad">ETAD (CVPRW'23)</a></li>
          <li><a href="configs/re2tal">Re2TAL (CVPR'23)</a></li>
          <li><a href="configs/adatad">AdaTAD (CVPR'24)</a></li>
        </ul>
      </td>
    </tr>
  </tbody>
</table>

The detailed configs, results, and pretrained models for each method can be found in the folders linked above.

πŸ› οΈ Installation

Please refer to install.md for installation.

πŸ“ Data Preparation

Please refer to data.md for data preparation.

🚀 Usage

Please refer to usage.md for details of training and evaluation scripts.
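As a quick orientation before reading usage.md, the sketch below shows how a training or evaluation run might be launched from Python. The tools/train.py and tools/test.py entry points, the config path, and the --checkpoint flag are assumptions made for illustration; usage.md is the authoritative reference for the exact commands and flags.

```python
# A minimal launch sketch. Assumptions: tools/train.py and tools/test.py accept a
# config path, and tools/test.py takes a --checkpoint flag; verify against usage.md.
import subprocess

config = "configs/actionformer/example_config.py"  # hypothetical placeholder path

# Train on a single GPU via torchrun; raise --nproc_per_node for multi-GPU runs.
subprocess.run(["torchrun", "--nproc_per_node=1", "tools/train.py", config], check=True)

# Evaluate a trained checkpoint (checkpoint path is a placeholder).
subprocess.run(
    ["torchrun", "--nproc_per_node=1", "tools/test.py", config,
     "--checkpoint", "path/to/checkpoint.pth"],
    check=True,
)
```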

📄 Updates

Please refer to changelog.md for update details.

🤝 Roadmap

Planned features and future work are tracked in roadmap.md.

πŸ–ŠοΈ Citation

[Acknowledgement] This repo is inspired by the OpenMMLab project, and we thank its contributors.

If you find this repo helpful, please cite us:

@misc{2024opentad,
    title={OpenTAD: An Open-Source Toolbox for Temporal Action Detection},
    author={Shuming Liu and Chen Zhao and Fatimah Zohra and Mattia Soldan and Carlos Hinojosa and Alejandro Pardo and Anthony Cioppa and Lama Alssum and Mengmeng Xu and Merey Ramazanova and Juan León Alcázar and Silvio Giancola and Bernard Ghanem},
    howpublished={\url{https://github.com/sming256/opentad}},
    year={2024}
}

If you have any questions, please contact: shuming.liu@kaust.edu.sa.