RelViT

<p align="center"><img width="540" src="./assets/overview.png"></p>

This repository hosts the code for the paper:

RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning (ICLR 2022)

by Xiaojian Ma, Weili Nie, Zhiding Yu, Huaizu Jiang, Chaowei Xiao, Yuke Zhu, Song-Chun Zhu and Anima Anandkumar

arXiv | Poster | Slides

Abstract

Reasoning about visual relationships is central to how humans interpret the visual world. This task remains challenging for current deep learning algorithms since it requires addressing three key technical problems jointly: 1) identifying object entities and their properties, 2) inferring semantic relations between pairs of entities, and 3) generalizing to novel object-relation combinations, i.e., systematic generalization. In this work, we use vision transformers (ViTs) as our base model for visual reasoning and make better use of concepts, defined as object entities and their relations, to improve the reasoning ability of ViTs. Specifically, we introduce a novel concept-feature dictionary to allow flexible image feature retrieval at training time with concept keys. This dictionary enables two new concept-guided auxiliary tasks: 1) a global task for promoting relational reasoning, and 2) a local task for facilitating semantic object-centric correspondence learning. To examine the systematic generalization of visual reasoning models, we introduce systematic splits for the standard HICO and GQA benchmarks. We show that the resulting model, Concept-guided Vision Transformer (RelViT for short), significantly outperforms prior approaches on HICO and GQA by 16% and 13% in the original split, and by 43% and 18% in the systematic split. Our ablation analyses also reveal our model's compatibility with multiple ViT variants and robustness to hyper-parameters.

Installation

The code has been tested with Python 3.8, PyTorch 1.11.0, and CUDA 11.6 on Ubuntu 20.04.
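A minimal setup sketch is below; the conda environment name and the `requirements.txt` at the repository root are assumptions, so adjust them to match your setup.

```bash
# Create and activate a fresh environment (conda assumed; a plain venv works too)
conda create -n relvit python=3.8 -y
conda activate relvit

# Install the tested PyTorch version; pick the wheel matching your CUDA toolkit
# from pytorch.org if you need a specific CUDA build
pip install torch==1.11.0 torchvision==0.12.0

# Install the remaining dependencies (a requirements.txt at the repo root is assumed)
pip install -r requirements.txt
```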

Data Preparation

Please refer to data preparation.

Training

<details><summary>HICO</summary>

```bash
bash scripts/train_hico_image.sh configs/train_hico.yaml
```

</details>

<details><summary>GQA</summary>

```bash
bash scripts/train_gqa_image.sh configs/train_gqa.yaml
```

</details>

Testing

<details><summary>HICO</summary>

```bash
bash scripts/train_hico_image.sh configs/train_hico.yaml --test_only --test_model <path to best_model.pth>
```

</details>

<details><summary>GQA</summary>

```bash
bash scripts/train_gqa_image.sh configs/train_gqa.yaml --test_only --test_model <path to best_model.pth>
```

</details>

Pre-trained models

| tag | encoder | experiment | result | URL |
|-----|---------|------------|--------|-----|
| swin-small-relvit | swin_small | GQA (val) | 61.38 | link |
| swin-base-relvit | swin_base | GQA (val) | 65.54 | link |
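To evaluate a released checkpoint, pass it to the test command above via `--test_model`. A sketch for the GQA model follows; the download step and the local path `checkpoints/swin_base_relvit.pth` are placeholders for illustration.

```bash
# Fetch a released checkpoint (substitute the URL from the table above)
mkdir -p checkpoints
wget -O checkpoints/swin_base_relvit.pth "<URL from the table>"

# Evaluate it on GQA with the documented test flags
bash scripts/train_gqa_image.sh configs/train_gqa.yaml \
    --test_only --test_model checkpoints/swin_base_relvit.pth
```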

License

Please check the LICENSE file for both the code and the released pre-trained models. This work may be used non-commercially, meaning for research or evaluation purposes only. For business inquiries, please contact researchinquiries@nvidia.com.

Acknowledgement

The authors have referred to the following projects:

SimCLR

DenseCL

EsViT

Swin-Transformer

PVT

HICODet

MCAN

Citation

Please consider citing our paper if you find our work helpful for your research:

```bibtex
@inproceedings{ma2022relvit,
    title={RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning},
    author={Xiaojian Ma and Weili Nie and Zhiding Yu and Huaizu Jiang and Chaowei Xiao and Yuke Zhu and Song-Chun Zhu and Anima Anandkumar},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=afoV8W3-IYp}
}
```