MotionSqueeze: Neural Motion Feature Learning for Video Understanding


<img src="/img/MS_module.png" width="100%" height="100%" alt="MS_module"></img>


This is the implementation of the paper "MotionSqueeze: Neural Motion Feature Learning for Video Understanding" by H. Kwon, M. Kim, S. Kwak, and M. Cho. For more information, check out the project website and the paper on arXiv.
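
For intuition, here is a minimal sketch of the two core steps of the MotionSqueeze (MS) module described in the paper: computing a local correlation between the feature maps of adjacent frames and converting it into a displacement map via soft-argmax. This is not the code in this repository; the function name estimate_displacement, the patch size, and the tensor shapes are illustrative assumptions.

    import torch
    import torch.nn.functional as F
    from spatial_correlation_sampler import SpatialCorrelationSampler

    def estimate_displacement(feat_t, feat_tp1, patch_size=15):
        # Correlate each position of feat_t with a (patch_size x patch_size)
        # neighborhood in feat_tp1, then take the softmax-weighted expectation
        # of the candidate offsets (soft-argmax) as the displacement.
        corr_layer = SpatialCorrelationSampler(kernel_size=1, patch_size=patch_size,
                                               stride=1, padding=0, dilation_patch=1)
        corr = corr_layer(feat_t, feat_tp1)              # (B, P, P, H, W)
        b, p, _, h, w = corr.shape
        corr = corr.reshape(b, p * p, h, w)
        prob = F.softmax(corr, dim=1)                    # match probability per offset
        r = (patch_size - 1) // 2
        coords = torch.arange(-r, r + 1, dtype=prob.dtype, device=prob.device)
        dy = coords.view(p, 1).expand(p, p).reshape(1, p * p, 1, 1)
        dx = coords.view(1, p).expand(p, p).reshape(1, p * p, 1, 1)
        disp_x = (prob * dx).sum(dim=1, keepdim=True)    # expected horizontal offset
        disp_y = (prob * dy).sum(dim=1, keepdim=True)    # expected vertical offset
        return torch.cat([disp_x, disp_y], dim=1)        # (B, 2, H, W)

In the paper, the estimated displacement (together with a confidence map) is further transformed by convolution layers into motion features and fused back into the backbone; see the code in this repository for the actual implementation.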

Environment

Clone this repo

git clone https://github.com/arunos728/MotionSqueeze.git

Anaconda environment setting

cd MotionSqueeze
conda env create -f environment.yml
conda activate MS
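
As an optional sanity check (not part of the repository's instructions), you can confirm that the environment provides a CUDA-enabled PyTorch build before compiling the extension:

    import torch
    print(torch.__version__)           # version pinned by environment.yml
    print(torch.cuda.is_available())   # should print True on a GPU machine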

Installing the correlation sampler

cd Pytorch-Correlation-extension
python setup.py install

Please check the Pytorch-Correlation-extension repository for detailed installation instructions.
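
The following smoke test is not part of the repository; it assumes the spatial_correlation_sampler package name used by that extension and uses arbitrary shapes, but it should confirm that the extension compiled and runs:

    import torch
    from spatial_correlation_sampler import SpatialCorrelationSampler

    # Arbitrary feature maps standing in for two adjacent frames.
    x1 = torch.randn(1, 64, 28, 28)
    x2 = torch.randn(1, 64, 28, 28)
    sampler = SpatialCorrelationSampler(kernel_size=1, patch_size=15,
                                        stride=1, padding=0)
    if torch.cuda.is_available():
        sampler, x1, x2 = sampler.cuda(), x1.cuda(), x2.cuda()
    out = sampler(x1, x2)
    print(out.shape)                   # expect torch.Size([1, 15, 15, 28, 28])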

Running

    ./scripts/train_TSM_Something_v1.sh local
    ./scripts/train_TSM_Kinetics.sh local
    ./scripts/test_TSM_Something_v1.sh local
    ./scripts/test_TSM_Kinetics.sh local

Citation

If you use this code or ideas from the paper for your research, please cite our paper:

@inproceedings{kwon2020motionsqueeze,
  title={MotionSqueeze: Neural Motion Feature Learning for Video Understanding},
  author={Heeseung Kwon and Manjin Kim and Suha Kwak and Minsu Cho},
  booktitle={ECCV},
  year={2020}
}

Contact

Heeseung Kwon (aruno@postech.ac.kr), Manjin Kim (mandos@postech.ac.kr)

Questions can also be left as issues in the repository. We will be happy to answer them.