Video Platform for Recognition and Detection in Pytorch

A platform for quick and easy development of deep learning networks for recognition and detection in videos. Includes popular models like C3D and SSD.

Check out our wiki!

Implemented Models and their performance

Recognition

| Model Architecture | Dataset | ViP Accuracy (%) |
|---|---|---|
| I3D | HMDB51 (Split 1) | 72.75 |
| C3D | HMDB51 (Split 1) | 50.14 ± 0.777 |
| C3D | UCF101 (Split 1) | 80.40 ± 0.399 |

Object Detection

| Model Architecture | Dataset | ViP Accuracy (%) |
|---|---|---|
| SSD300 | VOC2007 | 76.58 |

Video Object Grounding

| Model Architecture | Dataset | ViP Accuracy (%) |
|---|---|---|
| DVSA (+fw, obj) | YC2-BB (Validation) | 30.09 |

fw: framewise weighting, obj: object interaction

Citation

Please cite ViP in any released work that uses this platform: https://arxiv.org/abs/1910.02793

@article{ganesh2019vip,
  title={ViP: Video Platform for PyTorch},
  author={Ganesh, Madan Ravi and Hofesmann, Eric and Louis, Nathan and Corso, Jason},
  journal={arXiv preprint arXiv:1910.02793},
  year={2019}
}

Configured Datasets

| Dataset | Task(s) |
|---|---|
| HMDB51 | Activity Recognition |
| UCF101 | Activity Recognition |
| ImageNetVID | Video Object Detection |
| MSCOCO 2014 | Object Detection, Keypoints |
| VOC2007 | Object Detection, Classification |
| YC2-BB | Video Object Grounding |
| DHF1K | Video Saliency Prediction |

Models

| Model | Task(s) |
|---|---|
| C3D | Activity Recognition |
| I3D | Activity Recognition |
| SSD300 | Object Detection |
| DVSA (+fw, obj) | Video Object Grounding |

Requirements

Installation

# Set up Python3 virtual environment
virtualenv -p python3.6 --no-site-packages vip
source vip/bin/activate

# Clone ViP repository
git clone https://github.com/MichiganCOG/ViP
cd ViP

# Install requirements and model weights
./install.sh

Quick Start

Run train.py and eval.py to train or test any implemented model. The parameters of each experiment are specified in its config.yaml file.

Use the --cfg_file command line argument to point to a different config yaml file. Additionally, any config parameter can be overridden with a corresponding command line argument.
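For example, a config.yaml might contain a fragment like the one below. These key names are illustrative only, not necessarily the exact parameter names ViP uses:

```yaml
# Hypothetical config fragment -- key names are examples, not the real schema
dataset: HMDB51
batch_size: 4
epoch: 30
lr: 0.001
```

Any such key could then be overridden on the command line, e.g. `python train.py --cfg_file models/c3d/config_train.yaml --lr 0.0001` (again assuming `lr` is one of the config parameters).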

Testing

Run eval.py with the argument --cfg_file pointing to the desired model config yaml file.

Ex: From the root directory of ViP, evaluate the action recognition network C3D on HMDB51

python eval.py --cfg_file models/c3d/config_test.yaml

Training

Run train.py with the argument --cfg_file pointing to the desired model config yaml file.

Ex: From the root directory of ViP, train the action recognition network C3D on HMDB51

python train.py --cfg_file models/c3d/config_train.yaml

Additional examples can be found on our wiki.

Development

New models and datasets can be added without needing to rewrite any training, evaluation, or data loading code.

Add a Model

To add a new model:

  1. Create a new folder ViP/models/custom_model_name
  2. Create a model class in ViP/models/custom_model_name/custom_model_name.py
    • Complete __init__, forward, and (optional) __load_pretrained_weights functions
  3. Add PreprocessTrain and PreprocessEval classes within custom_model_name.py
  4. Create config_train.yaml and config_test.yaml files for the new model
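The steps above can be sketched as follows. This is a minimal structural outline only, not the actual ViP API: real models subclass torch.nn.Module, and the class internals here are placeholders so the sketch stays self-contained.

```python
# Hypothetical skeleton for ViP/models/custom_model_name/custom_model_name.py.
# All internals are placeholders; a real model would build PyTorch layers in
# __init__ and run them in forward.

class PreprocessTrain:
    """Training-time preprocessing (placeholder: identity transform)."""
    def __call__(self, clip):
        return clip

class PreprocessEval:
    """Evaluation-time preprocessing (placeholder: identity transform)."""
    def __call__(self, clip):
        return clip

class CustomModelName:
    def __init__(self, num_classes, **kwargs):
        self.num_classes = num_classes

    def forward(self, x):
        # Placeholder: return a uniform score per class instead of real logits.
        return [1.0 / self.num_classes] * self.num_classes

    def __load_pretrained_weights(self, path):
        # Optional: load pretrained weights from `path` (placeholder no-op).
        pass
```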

Examples of previously implemented models can be found here.

Additional information can be found on our wiki.

Add a Dataset

To add a new dataset:

  1. Convert annotation data to our JSON format
    • The JSON skeleton templates can be found here
    • Existing scripts for datasets can be found here
  2. Create a dataset class in ViP/datasets/custom_dataset_name.py.
    • Inherit DetectionDataset or RecognitionDataset from ViP/abstract_dataset.py
    • Complete __init__ and __getitem__ functions
    • Example skeleton dataset can be found here
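A matching sketch of the dataset side, again with placeholder internals: the RecognitionDataset base class here is a stand-in for the one in ViP/abstract_dataset.py, and the annotation layout is illustrative rather than ViP's real JSON format.

```python
# Hypothetical skeleton for ViP/datasets/custom_dataset_name.py.
# The base class and annotation fields below are illustrative stand-ins.

class RecognitionDataset:
    """Stand-in for the base class in ViP/abstract_dataset.py."""
    def __init__(self, json_annotations):
        self.samples = json_annotations

class CustomDatasetName(RecognitionDataset):
    def __init__(self, json_annotations, transform=None):
        super().__init__(json_annotations)
        self.transform = transform

    def __getitem__(self, idx):
        sample = self.samples[idx]
        clip = sample['frames']   # placeholder for decoded video frames
        label = sample['label']
        if self.transform is not None:
            clip = self.transform(clip)
        return clip, label

    def __len__(self):
        return len(self.samples)
```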

Additional information can be found on our wiki.

FAQ

A detailed FAQ can be found on our wiki.