<p align="center"> <img width="130%" src="./.github/media/logo_horizontal_color.png" /> </p> <p align="center"> <a href="https://github.com/facebookresearch/pytorchvideo/blob/main/LICENSE"> <img src="https://img.shields.io/pypi/l/pytorchvideo" alt="License" /> </a> <a href="https://pypi.org/project/pytorchvideo/"> <img src="https://img.shields.io/pypi/v/pytorchvideo?color=blue&label=release" alt="PyPI release" /> </a> <a href="https://circleci.com/gh/facebookresearch/pytorchvideo/tree/main"> <img src="https://img.shields.io/circleci/build/github/facebookresearch/pytorchvideo/main?token=efdf3ff5b6f6acf44f4af39b683dea31d40e5901" alt="CircleCI" /> </a> <a href="https://codecov.io/gh/facebookresearch/pytorchvideo/branch/main"> <img src="https://codecov.io/gh/facebookresearch/pytorchvideo/branch/main/graph/badge.svg?token=OSZSI6JU31" alt="Codecov" /> </a> <a href="https://pytorchvideo.slack.com/join/shared_invite/zt-wx8xsblj-eAfx6wox9tSuFrAm8KaiPg#/shared-invite/email"> <img src="http://img.shields.io/static/v1?label=Join%20us%20on&message=%23pytorchvideo&labelColor=%234A154B&logo=slack" alt="Slack" /> </a> </p> <p align="center"> <i> A deep learning library for video understanding research.</i> </p> <p align="center"> <i>Check the <a href="https://pytorchvideo.org/">website</a> for more information.</i> </p>
<img src="https://media.giphy.com/media/clMMFBLywc4Sa3KXDb/giphy.gif" width="200"><img src=".github/media/ava_slowfast.gif" width="1300">
A PyTorchVideo-accelerated X3D model running on a Samsung Galaxy S10 phone. The model runs ~8x faster than real time, requiring roughly 130 ms to process one second of video. A PyTorchVideo-based SlowFast model performing video action detection.

X3D model Web Demo

Integrated into Hugging Face Spaces with Gradio. See the demo: Hugging Face Spaces

Introduction

PyTorchVideo is a deep learning library focused on video understanding. It provides the reusable, modular, and efficient components needed to accelerate video understanding research. PyTorchVideo is built on PyTorch and supports video-specific deep learning components such as video models, video datasets, and video transforms.
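As a concrete example of a video-specific transform, PyTorchVideo provides `UniformTemporalSubsample`, which selects a fixed number of evenly spaced frames from a clip. The core idea can be sketched in plain Python; this is an illustrative sketch only, not the library's implementation, which operates on the temporal dimension of a torch tensor.

```python
# Illustrative sketch of uniform temporal subsampling: pick `num_samples`
# evenly spaced frame indices from a clip of `num_frames` frames.
# PyTorchVideo's UniformTemporalSubsample does the equivalent on a tensor
# and may differ in edge cases.

def uniform_temporal_subsample_indices(num_frames, num_samples):
    """Return evenly spaced frame indices in [0, num_frames - 1]."""
    if num_samples == 1:
        # Single sample: take a middle frame (a choice made for this sketch).
        return [(num_frames - 1) // 2]
    step = (num_frames - 1) / (num_samples - 1)
    return [round(i * step) for i in range(num_samples)]

# Example: subsample a 30-frame clip down to 4 frames.
indices = uniform_temporal_subsample_indices(30, 4)
print(indices)  # [0, 10, 19, 29] -- spread evenly across the clip
```

In the library, the same operation is applied to a clip tensor so that every clip fed to a model has the same number of frames regardless of its original length.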

Key features include:

- Reusable, modular, and efficient video components built on PyTorch
- A large model zoo of pretrained video models with baseline results
- Video-specific datasets and transforms
- Accelerated on-device inference (e.g., the X3D mobile demo above)

Updates

Installation

Install PyTorchVideo inside a conda environment (Python >= 3.7) with:

pip install pytorchvideo

For detailed instructions please refer to INSTALL.md.

License

PyTorchVideo is released under the Apache 2.0 License.

Tutorials

Get started with PyTorchVideo by trying out one of our tutorials or by running the examples in the tutorials folder.

Model Zoo and Baselines

We provide a large set of baseline results and trained models available for download in the PyTorchVideo Model Zoo.

Contributors

Here is the growing list of PyTorchVideo contributors in alphabetical order (let us know if you would like to be added): Aaron Adcock, Amy Bearman, Bernard Nguyen, Bo Xiong, Chengyuan Yan, Christoph Feichtenhofer, Dave Schnizlein, Haoqi Fan, Heng Wang, Jackson Hamburger, Jitendra Malik, Kalyan Vasudev Alwala, Matt Feiszli, Nikhila Ravi, Ross Girshick, Tullie Murrell, Wan-Yen Lo, Weiyao Wang, Xiaowen Lin, Yanghao Li, Yilei Li, Zhengxing Chen, Zhicheng Yan.

Development

We welcome new contributions to PyTorchVideo and we will be actively maintaining this library! Please refer to CONTRIBUTING.md for full instructions on how to run the code, tests and linter, and submit your pull requests.

Citing PyTorchVideo

If you find PyTorchVideo useful in your work, please use the following BibTeX entry for citation.

@inproceedings{fan2021pytorchvideo,
    author =       {Haoqi Fan and Tullie Murrell and Heng Wang and Kalyan Vasudev Alwala and Yanghao Li and Yilei Li and Bo Xiong and Nikhila Ravi and Meng Li and Haichuan Yang and Jitendra Malik and Ross Girshick and Matt Feiszli and Aaron Adcock and Wan-Yen Lo and Christoph Feichtenhofer},
    title = {{PyTorchVideo}: A Deep Learning Library for Video Understanding},
    booktitle = {Proceedings of the 29th ACM International Conference on Multimedia},
    year = {2021},
    note = {\url{https://pytorchvideo.org/}},
}