Human Attention in Image Captioning: Dataset and Analysis (ICCV 2019)

Introduction

This is the GitHub page for our ICCV 2019 paper (link). We provide links to the data collected for the paper:

capgaze1: contains 1000 images and the raw data (eye fixations, verbal descriptions, and the transcribed text descriptions) from 5 native English speakers. This part of the data was used for the analysis. For data-privacy reasons, the voice in each verbal description was masked by pitch modulation; the spoken content was preserved.
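For reference, pitch-based voice masking of this kind can be reproduced with standard audio tools. Below is a minimal sketch using librosa; the file paths and the shift amount are illustrative assumptions, not the exact transform applied to the released recordings:

```python
import librosa
import soundfile as sf

# Load a verbal-description recording (path is illustrative, not the dataset layout).
y, sr = librosa.load("capgaze1/audio/0001.wav", sr=None)

# Shift the pitch by a few semitones to mask the speaker's identity while
# keeping the spoken content intelligible. The released data may use a
# different shift; n_steps=4 is only an example.
y_masked = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)
sf.write("masked_0001.wav", y_masked, sr)
```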

capgaze2: contains 3000 images and processed data (for each image, we combined the eye fixations from all participants into a single fixation map). This part of the data was used to develop a saliency-prediction model for the image-captioning task.
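The exact map-construction procedure is described in the paper; as an illustration, a common way to turn pooled fixation points into a fixation map is to accumulate them on an empty grid and smooth with a Gaussian kernel. A minimal sketch, where the fixation format and the sigma value are assumptions rather than the paper's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_fixation_map(fixations, height, width, sigma=30):
    """Combine (x, y) fixation points pooled across viewers into one map.

    fixations: iterable of (x, y) pixel coordinates from all participants.
    sigma: Gaussian blur width in pixels (illustrative value).
    """
    fixation_map = np.zeros((height, width), dtype=np.float64)
    for x, y in fixations:
        # Keep only fixations that fall inside the image bounds.
        if 0 <= y < height and 0 <= x < width:
            fixation_map[int(y), int(x)] += 1.0
    # Smooth the discrete points into a continuous saliency-style map.
    fixation_map = gaussian_filter(fixation_map, sigma=sigma)
    # Normalize to [0, 1] so maps are comparable across images.
    if fixation_map.max() > 0:
        fixation_map /= fixation_map.max()
    return fixation_map
```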

We also provide code in the demo folder for extracting the information from the collected data (see the example in demo.ipynb).
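For a quick look at the per-image records before opening the notebook, a hypothetical loading snippet might look like the following; the file path, pickle format, and dictionary keys are assumptions, and demo.ipynb shows the actual access pattern:

```python
import pickle

# Hypothetical example: the real file layout is documented in demo.ipynb.
# We assume here that each image's record is a pickled dict holding the
# eye fixations and the transcribed caption.
with open("capgaze1/annotations/0001.pkl", "rb") as f:  # assumed path
    record = pickle.load(f)

fixations = record["fixations"]  # assumed key: list of (x, y) gaze points
caption = record["caption"]      # assumed key: transcribed description
print(len(fixations), "fixations;", caption)
```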

Contact

senhe752@gmail.com