
Intentonomy: a Dataset and Study towards Human Intent Understanding

Project page for the paper:

Intentonomy: a Dataset and Study towards Human Intent Understanding CVPR 2021 (oral)

(Teaser figure)

1️⃣ Intentonomy Dataset

(Intent ontology figure)

Download

We introduce a human intent dataset, Intentonomy, containing 14K images that are manually annotated with 28 intent categories, organized into a hierarchy by psychology experts. See DATA.md for how to download the dataset.
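As a minimal sketch of loading the annotations once downloaded (the filename `intentonomy_train.json` and the fields `annotations`, `image_id`, and `intent_id` below are hypothetical placeholders; see DATA.md for the actual files and schema):

```python
import json
from collections import defaultdict

# Hypothetical filename and field names; consult DATA.md for the actual
# layout of the released annotation files.
with open("intentonomy_train.json") as f:
    data = json.load(f)

# Map each image id to its (possibly multiple) intent category ids.
image_to_intents = defaultdict(list)
for ann in data["annotations"]:
    image_to_intents[ann["image_id"]].append(ann["intent_id"])

print(f"{len(image_to_intents)} annotated images")
```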

Annotation Demo

We employed a "game with a purpose" approach to acquire the intent annotations from Amazon Mechanical Turk workers. See this link for a further demonstration of the interface, and Appendix C in our paper for details.

2️⃣ From Image Content to Human Intent

To investigate the intangible and subtle connection between visual content and intent, we present a systematic study that evaluates how intent recognition performance changes as a function of (a) the amount of object/context information; and (b) the properties of the object/context, including geometry, resolution, and texture. Our study suggests that:

  1. different intent categories rely on different sets of objects and scenes for recognition;
  2. however, for some classes that we observed to have large intra-class variation, visual content provides a negligible boost to performance;
  3. attending to relevant object and scene classes brings beneficial effects for recognizing intent.

3️⃣ Intent Recognition Baseline

We introduce a framework that uses weakly-supervised localization and an auxiliary hashtag modality to narrow the gap between human and machine understanding of images. We provide the results of our baseline model below.

Localization loss implementation

We provide the implementation of the proposed localization loss in loc_loss.py, where the default parameters are the ones we used in the paper. Download the masks for our images (518M) here and update MASK_ROOT in the script.

Note that you will need the cv2 and pycocotools libraries to use Localizationloss. Additional notes are included in loc_loss.py.
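As a rough sketch of how the localization loss might be combined with the classification loss in a training step (the class name `LocalizationLoss`, its constructor argument, and the assumed model outputs below are illustrative assumptions; the actual interface and the defaults used in the paper are in loc_loss.py):

```python
import torch
from loc_loss import LocalizationLoss  # class name assumed; see loc_loss.py

# Constructor argument and forward signature are assumptions, not the
# actual API of loc_loss.py.
loc_criterion = LocalizationLoss(mask_root="/path/to/masks")
cls_criterion = torch.nn.BCEWithLogitsLoss()

def training_step(model, images, targets, image_ids):
    # Assumed model outputs: multi-label logits plus spatial feature maps
    # that the localization loss encourages to focus on relevant regions.
    logits, feature_maps = model(images)
    loss = cls_criterion(logits, targets)
    loss = loss + loc_criterion(feature_maps, image_ids)
    return loss
```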

Identifying intent classes

We break down the intent classes into different subsets based on:

  1. content dependency: i.e., object-dependent (O-classes), context-dependent (C-classes), and Others, which depend on both foreground and background information;
  2. difficulty: how much the VISUAL baseline outperforms the RANDOM baseline ("easy", "medium" and "hard"); see the illustrative sketch below.

See Appendix A in our paper for details.
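As an illustrative sketch only (not the exact procedure of Appendix A), the difficulty split can be thought of as bucketing classes by the gap between their per-class VISUAL and RANDOM F1 scores; the threshold values below are made up:

```python
def difficulty_buckets(visual_f1, random_f1, easy_gap=0.30, hard_gap=0.05):
    """Group class ids by how much VISUAL improves over RANDOM per-class F1.

    visual_f1 and random_f1 map class id -> F1 score; the thresholds are
    illustrative, not the ones used in the paper (see Appendix A).
    """
    buckets = {"easy": [], "medium": [], "hard": []}
    for cls, score in visual_f1.items():
        gap = score - random_f1[cls]
        if gap >= easy_gap:
            buckets["easy"].append(cls)
        elif gap <= hard_gap:
            buckets["hard"].append(cls)
        else:
            buckets["medium"].append(cls)
    return buckets
```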


Baseline results

Validation set results:

| | Macro F1 | Micro F1 | Samples F1 |
| --- | --- | --- | --- |
| VISUAL | 23.03 $\pm$ 0.79 | 31.36 $\pm$ 1.16 | 29.91 $\pm$ 1.73 |
| VISUAL + $L_{loc}$ | 24.42 $\pm$ 0.95 | 32.87 $\pm$ 1.13 | 32.46 $\pm$ 1.18 |
| VISUAL + $L_{loc}$ + HT | 25.07 $\pm$ 0.52 | 32.94 $\pm$ 1.16 | 33.61 $\pm$ 0.92 |

Test set results:

| | Macro F1 | Micro F1 | Samples F1 |
| --- | --- | --- | --- |
| VISUAL | 22.77 $\pm$ 0.59 | 30.23 $\pm$ 0.73 | 28.45 $\pm$ 1.71 |
| VISUAL + $L_{loc}$ | 24.37 $\pm$ 0.65 | 32.07 $\pm$ 0.84 | 30.91 $\pm$ 1.27 |
| VISUAL + $L_{loc}$ + HT | 23.98 $\pm$ 0.85 | 31.28 $\pm$ 0.36 | 31.39 $\pm$ 0.78 |
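For reference, Macro F1, Micro F1 and Samples F1 can be computed from multi-label predictions with scikit-learn's averaging modes; a minimal sketch with random placeholder predictions (not our model outputs):

```python
import numpy as np
from sklearn.metrics import f1_score

# Placeholder multi-label targets/predictions over 28 intent classes;
# in practice y_pred comes from thresholded model outputs.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, 28))
y_pred = rng.integers(0, 2, size=(100, 28))

for avg in ("macro", "micro", "samples"):
    print(avg, f1_score(y_true, y_pred, average=avg, zero_division=0))
```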

Subsets results on validation set

By content dependency:

| | object | context | other |
| --- | --- | --- | --- |
| VISUAL | 25.58 $\pm$ 2.51 | 30.16 $\pm$ 2.97 | 21.34 $\pm$ 0.74 |
| VISUAL + $L_{loc}$ | 28.15 $\pm$ 1.94 | 28.62 $\pm$ 2.13 | 22.60 $\pm$ 1.40 |
| VISUAL + $L_{loc}$ + HT | 29.66 $\pm$ 2.19 | 32.48 $\pm$ 1.34 | 22.61 $\pm$ 0.48 |

By difficulty:

| | easy | medium | hard |
| --- | --- | --- | --- |
| VISUAL | 54.64 $\pm$ 2.54 | 24.92 $\pm$ 1.18 | 10.71 $\pm$ 1.33 |
| VISUAL + $L_{loc}$ | 57.10 $\pm$ 1.84 | 25.68 $\pm$ 1.24 | 12.72 $\pm$ 2.31 |
| VISUAL + $L_{loc}$ + HT | 58.86 $\pm$ 2.56 | 26.30 $\pm$ 1.42 | 13.11 $\pm$ 2.15 |

Citation

If you find our work helpful in your research, please cite it as:

@inproceedings{jia2021intentonomy,
  title={Intentonomy: a Dataset and Study towards Human Intent Understanding},
  author={Jia, Menglin and Wu, Zuxuan and Reiter, Austin and Cardie, Claire and Belongie, Serge and Lim, Ser-Nam},
  booktitle={CVPR},
  year={2021}
}

Acknowledgement

We thank Luke Chesser and Timothy Carbone from Unsplash for providing the images, Kimberly Wilber and Bor-chun Chen for tips and suggestions about the annotation interface and annotator management, Kevin Musgrave for the general discussion, and anonymous reviewers for their valuable feedback.