Overview

This repo is the implementation of MGSampler. The code is based on MMAction2.

Dependencies

Installation:

a. Create a conda virtual environment and activate it.

conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab

b. Install PyTorch and torchvision following the official instructions, e.g.,

conda install pytorch torchvision -c pytorch

Note: Make sure that your compilation CUDA version and runtime CUDA version match. You can check the supported CUDA version for precompiled packages on the PyTorch website.

c. Install mmcv. We recommend installing the pre-built mmcv-full package as below.

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html

d. Clone the MGSampler repository.

git clone https://github.com/MCG-NJU/MGSampler.git

e. Install build requirements and then install MMAction2.

pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"

Data Preparation:

Please refer to the default MMAction2 dataset setup to prepare the datasets correctly.

Training

MGSampler is a sampling strategy that guides the model to choose motion-salient frames. It can easily be inserted into existing code; we mainly changed three places in MMAction2, explained below.
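The motion-guided idea can be sketched as follows: estimate per-frame motion from frame differences, then place samples evenly in cumulative-motion space rather than in time, so segments with more motion receive more frames. This is a minimal NumPy sketch under that assumption; the function name and details are illustrative and not the repo's actual implementation.

```python
import numpy as np

def motion_guided_sample(frames, num_segments):
    """Illustrative sketch (not the repo's code): sample frame indices
    evenly in cumulative-motion space.

    frames: (T, H, W) array; returns num_segments sampled frame indices.
    """
    # Crude motion estimate: mean absolute difference between consecutive frames.
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2))
    diffs = diffs + 1e-6  # avoid a degenerate all-zero (static video) case

    # Normalized cumulative motion: cum[i] is the fraction of total motion
    # accumulated up to frame i, so cum runs from 0 to 1 over the video.
    cum = np.concatenate([[0.0], np.cumsum(diffs)])
    cum /= cum[-1]

    # Bin centers of num_segments equal bins in cumulative-motion space.
    targets = (np.arange(num_segments) + 0.5) / num_segments
    return np.searchsorted(cum, targets)

# Toy example: 8 still frames followed by 8 changing frames; the sampled
# indices should concentrate in the second (moving) half of the clip.
frames = np.zeros((16, 4, 4), dtype=np.float32)
frames[8:] = np.arange(8, dtype=np.float32)[:, None, None]
idx = motion_guided_sample(frames, 4)
```

With a uniform sampler, the four indices would be spread across the whole clip; here they all fall in the moving half, which is the behavior MGSampler exploits.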

To train a model, run the following command (taking the Something-Something V1 dataset and the TSM model as an example):

bash tools/dist_train.sh configs/recognition/tsm/tsm_r50_1x1x8_50e_sthv1_rgb.py 8 --validate

License

This project is released under the Apache-2.0 license.

Acknowledgement

In addition to the MMAction2 codebase, this repo contains modified code from: