VideoQA

This is the implementation of our paper "Video Question Answering via Gradually Refined Attention over Appearance and Motion".

Datasets

For our experiments, we created two VideoQA datasets, MSVD-QA and MSRVTT-QA. Both are built on existing video description datasets: the QA pairs are generated from the descriptions using this tool with additional processing steps, and the corresponding videos come from the base datasets, MSVD and MSR-VTT. For MSVD-QA, youtube_mapping.txt may be needed to build the mapping of video names. The following are some examples from the datasets.
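As a hypothetical sketch, the mapping file could be parsed as below. This assumes each line of youtube_mapping.txt holds a clip identifier followed by a video name (e.g. `mv89psg6zh4_33_46 vid1`); verify the exact format against the actual file before relying on it.

```python
def load_video_mapping(path):
    """Parse a youtube_mapping.txt-style file into {clip_id: vid_name}.

    Assumed line format: "<clip_id> <vid_name>" -- an assumption,
    not taken from the repository itself.
    """
    mapping = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:  # skip blank or malformed lines
                clip_id, vid_name = parts
                mapping[clip_id] = vid_name
    return mapping
```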

[Example QA pairs from the MSVD-QA and MSRVTT-QA datasets]

Models

We propose a model with gradually refined attention over appearance and motion in the video to tackle the VideoQA task; the architecture is presented below. We also compare the proposed model with three baseline models. Details can be found in the paper.

[Model architecture]
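To convey the idea of refinement, here is a minimal NumPy sketch of one attention step being refined by another: the question first attends over appearance features, and the resulting memory then attends over motion features. The shapes, weight matrices, and scoring function are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(features, query, W):
    # features: (T, d) per-step video features; query: (d,); W: (d, d).
    scores = features @ (W @ query)   # (T,) relevance of each step
    weights = softmax(scores)         # attention distribution over T steps
    return weights @ features         # (d,) attended summary

def refined_step(appearance, motion, question, Wa, Wm):
    # Attend over appearance with the question, then use the resulting
    # memory to attend over motion -- a single "refinement" pass.
    memory = attend(appearance, question, Wa)
    memory = attend(motion, memory, Wm)
    return memory
```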

Code

The code is written in pure Python, with TensorFlow as the deep learning library. It uses two community implementations of feature extraction networks: VGG16 for appearance and C3D for motion.
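Appearance features are typically extracted from a fixed number of frames sampled across the video (with clips handled analogously for C3D). The sketch below shows one common way to pick evenly spaced frame indices; the repository's actual sampling scheme and sample count may differ.

```python
import numpy as np

def sample_indices(num_frames, num_samples=20):
    """Return evenly spaced frame indices for feature extraction.

    Illustrative only: num_samples=20 is an assumed default, not
    necessarily the value used in the repository.
    """
    return np.linspace(0, num_frames - 1, num=num_samples).astype(int)
```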

Environments

Prerequisites

  1. Clone the repository to your local machine.

    $ git clone https://github.com/xudejing/VideoQA.git
    
  2. Download the VGG16 and C3D checkpoints provided in the corresponding repositories and put them in the util directory. Download the word embeddings trained over 6B tokens (glove.6B.zip) from GloVe and put the 300d file (glove.6B.300d.txt) in the util directory.

  3. Install the python dependency packages.

    $ pip install -r requirements.txt
    

Usage

The directory model contains the definitions of the four models. config.py defines the parameters of the models and the training process.
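The `--config 0` flag in the commands below selects a parameter set by index. As a hypothetical illustration of how config.py might group such settings (the field names and values here are assumptions, not the repository's actual fields):

```python
# Hypothetical example of indexed hyperparameter sets; the real
# config.py in the repository defines its own names and values.
CONFIGS = [
    {
        "word_dim": 300,            # GloVe 300d word embeddings
        "video_feature_dim": 4096,  # VGG16 fc7 / C3D fc6 feature size
        "hidden_dim": 256,
        "batch_size": 64,
        "learning_rate": 1e-3,
        "epochs": 30,
    },
]

def get_config(index):
    # Selected by the --config command-line flag.
    return CONFIGS[index]
```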

  1. Preprocess the VideoQA datasets, for example:

    $ python preprocess_msvdqa.py {dataset location}
    
  2. Train, validate and test the models, for example:

    $ python run_gra.py --mode train --gpu 0 --log log/evqa --dataset msvd_qa --config 0
    

    (Note: you can pass -h to get help.)

  3. Visualize the training process using tensorboard, for example:

    $ tensorboard --logdir log --port 8888
    

Citation

If you find this code useful, please cite the following paper:

@inproceedings{xu2017video,
  title={Video Question Answering via Gradually Refined Attention over Appearance and Motion},
  author={Xu, Dejing and Zhao, Zhou and Xiao, Jun and Wu, Fei and Zhang, Hanwang and He, Xiangnan and Zhuang, Yueting},
  booktitle={ACM Multimedia},
  year={2017}
}