ICCV2023 - IntentQA: Context-aware Video Intent Reasoning

Introduction

The project is described in our paper IntentQA: Context-aware Video Intent Reasoning (ICCV2023, Oral).

Among the recent flourishing studies on cross-modal vision-language understanding, video question answering (VideoQA) is one of the most prominent, as it supports interactive AI that can understand and communicate about dynamic visual scenarios via natural language. Despite its popularity, VideoQA remains quite challenging: it demands that models comprehensively understand videos in order to answer questions correctly, and those questions include not only factual but also inferential ones. The former directly ask about visual facts (e.g., humans, objects, actions), while the latter (inference VideoQA) require logical reasoning over latent variables (e.g., the spatial, temporal, and causal relationships among entities, mental states, etc.) beyond the observed visual facts. The future trend for AI is to study inference VideoQA beyond factoid VideoQA, which requires reasoning ability beyond mere recognition. In this paper, we propose a new task called IntentQA, i.e., a special kind of inference VideoQA that focuses on intent reasoning.


Dataset

Please download the pre-computed features and original videos from here.

There are 3 folders:

Please download the QA annotations from here. There are 3 files (train.csv, val.csv, test.csv):

In each annotation file, the initial columns follow the same format as in NExT-QA. Building on that foundation, we add extra columns carrying our additional annotations.
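As a quick sanity check, here is a minimal pandas sketch for inspecting the annotation files. The path below is an assumption (adjust it to wherever you saved the CSVs):

```python
import pandas as pd

# Load one split of the QA annotations (path is an assumption; adjust as needed).
train = pd.read_csv("data/datasets/intentqa/train.csv")

# The first columns mirror NExT-QA's schema; the trailing columns are the
# extra IntentQA annotations described above.
print(train.columns.tolist())
print(train.head())
```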

Results

All values are accuracy (%).

| Model | Text Rep. | CW | CH | TP&TN | Total | Result File |
|---|---|---|---|---|---|---|
| EVQA | GloVe | 25.92 | 34.54 | 25.52 | 27.27 | |
| CoMem | GloVe | 30.00 | 28.69 | 28.95 | 29.52 | |
| HGA | GloVe | 32.00 | 30.64 | 31.05 | 31.54 | |
| HME | GloVe | 34.40 | 34.26 | 29.14 | 33.08 | |
| HQGA | GloVe | 33.20 | 34.26 | 36.57 | 34.21 | |
| CoMem | BERT | 47.68 | 54.87 | 39.05 | 46.77 | |
| HGA | BERT | 44.88 | 50.97 | 39.62 | 44.61 | |
| HME | BERT | 46.08 | 54.32 | 40.76 | 46.16 | |
| HQGA | BERT | 48.24 | 54.32 | 41.71 | 47.66 | |
| VGT | BERT | 51.44 | 55.99 | 47.62 | 51.27 | |
| Blind GPT | BERT | 52.16 | 61.28 | 43.43 | 51.55 | Here |
| Ours w/o GPT | BERT | 55.28 | 61.56 | 47.81 | 54.50 | Here |
| Ours | BERT | 58.40 | 65.46 | 50.48 | 57.64 | Here |
| Human | - | 77.76 | 80.22 | 79.05 | 78.49 | Here |
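For reference, here is a hypothetical sketch of how per-type accuracies like those above could be computed from a result file. The JSON schema (a qid-to-prediction mapping) and the CSV column names (`qid`, `type`, `answer`) are assumptions, not the repo's documented format; inspect the actual files before relying on them:

```python
import json
import pandas as pd

# Assumed schema: the result JSON maps question id -> predicted answer index,
# and test.csv carries 'qid', 'type' (CW/CH/TP&TN), and 'answer' columns.
with open("test-res.json") as f:
    preds = json.load(f)
anno = pd.read_csv("data/datasets/intentqa/test.csv")

anno["correct"] = [
    preds.get(str(qid)) == ans for qid, ans in zip(anno["qid"], anno["answer"])
]
print(f"Total: {100 * anno['correct'].mean():.2f}")
for qtype, group in anno.groupby("type"):
    print(f"{qtype}: {100 * group['correct'].mean():.2f}")
```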

Demo

Here is a demo that briefly summarizes our work.

Install

conda create -n intentqa python==3.8.8
conda activate intentqa
git clone https://github.com/sail-sg/VGT.git
cd VGT
pip install -r requirements.txt

Inference and Evaluation

./shell/intentqa_test.sh 0
python eval_intentqa.py --folder your_work_dir --mode test

Using GPT

Add the following to intentqa_test.sh:

--GPT_result='../data/save_models/intentqa/Your_GPT_result_DIR/test-res.json'

Alternatively, you can use our result files linked in the Results section above.
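Before wiring a result file in, it may help to peek at its contents. A minimal sketch (the qid-to-prediction layout mentioned in the comments is an assumption; verify against the downloaded file):

```python
import json

# Peek at a GPT result file before passing it via --GPT_result.
with open("test-res.json") as f:
    res = json.load(f)

# Print a few entries to confirm the layout (assumed to be a qid -> prediction
# mapping, but the file may also be a list; check the actual contents).
sample = list(res.items())[:3] if isinstance(res, dict) else res[:3]
print(type(res).__name__, len(res))
print(sample)
```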

Citation

This repository is built upon VGT, and we sincerely thank its authors for their outstanding work. If you find our work useful, please cite:

@inproceedings{Li_2023_ICCV,
  author    = {Li, Jiapeng and Wei, Ping and Han, Wenjuan and Fan, Lifeng},
  title     = {IntentQA: Context-aware Video Intent Reasoning},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {11963--11974}
}
@inproceedings{xiao2022video,
  title     = {Video Graph Transformer for Video Question Answering},
  author    = {Xiao, Junbin and Zhou, Pan and Chua, Tat-Seng and Yan, Shuicheng},
  booktitle = {European Conference on Computer Vision},
  pages     = {39--58},
  year      = {2022},
  organization = {Springer}
}