
Structured Attentions for Visual Question Answering

The repository contains most of the code needed to reproduce the experimental results of the paper Structured Attentions for Visual Question Answering on the VQA-1.0 and VQA-2.0 datasets. Currently only the accelerated version of Mean Field inference is provided, which was used in the VQA 2.0 challenge.
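The paper treats attention as latent binary variables in a graphical model over image regions and infers their marginals with mean-field updates. For orientation only, the snippet below is a minimal, generic mean-field sketch for binary attention variables with pairwise potentials; the function name, the NumPy implementation, and the particular parameterisation are assumptions for illustration, not the repository's accelerated version.

import numpy as np

def mean_field_attention(unary, pairwise, n_iters=5):
    """Mean-field marginals q_i = q(z_i = 1) for binary attention variables.

    unary:    (N,) unary log-potentials, one per image region.
    pairwise: (N, N) symmetric pairwise log-potentials between regions
              (zero diagonal; zero for non-neighbouring regions).
    """
    q = 1.0 / (1.0 + np.exp(-unary))          # initialise from the unaries
    for _ in range(n_iters):
        field = unary + pairwise.dot(q)       # messages from current marginals
        q = 1.0 / (1.0 + np.exp(-field))      # coordinate-wise sigmoid update
    return q

# Toy example (hypothetical numbers): 4 regions on a line, neighbours encouraged to agree.
unary = np.array([2.0, 0.5, -0.5, -2.0])
pairwise = np.zeros((4, 4))
for i in range(3):
    pairwise[i, i + 1] = pairwise[i + 1, i] = 1.0
print(mean_field_attention(unary, pairwise))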

Figure: the framework of the proposed network.

Prerequisites

To reproduce the experimental results, first install the required dependencies.

Training from scratch

Set the arguments and run train_VQA.py.
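For example, assuming the script parses its arguments with argparse, you can list the configurable options and then launch a run; the flags in the second command are hypothetical placeholders and should be replaced with the script's actual arguments.

python train_VQA.py --help                            # list the configurable arguments (assumes argparse)
python train_VQA.py --dataset VQA2 --batch_size 128   # hypothetical example run; use the real flags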

Pretrained models

The best single-model accuracies on test-dev of VQA-1.0 and VQA-2.0, with skip-thought vector initialization and Visual Genome training data, are 67.19 and 64.78, respectively. Here is the pretrained model for VQA-2.0.

Citation

If you find this repository helpful, please cite

@inproceedings{chen2017sva,
  title={Structured Attentions for Visual Question Answering},
  author={Zhu, Chen and Zhao, Yanpeng and Huang, Shuaiyi and Tu, Kewei and Ma, Yi},
  booktitle={IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
}

Licence

This code is distributed under the MIT License.