# Gaze-Attention
Integrating Human Gaze into Attention for Egocentric Activity Recognition (WACV 2021)
paper | presentation
## Overview
It is well known that human gaze carries significant information about visual attention. In this work, we introduce an effective probabilistic approach to integrate human gaze into spatiotemporal attention for egocentric activity recognition. Specifically, we propose to reformulate the discrete training objective so that it can be optimized using an unbiased gradient estimator. It is empirically shown that our gaze-combined attention mechanism leads to a significant improvement of activity recognition performance on egocentric videos by providing additional cues across space and time.
| Method | Backbone network | Acc (%) | Acc* (%) |
| --- | --- | --- | --- |
| Li et al. | I3D | 53.30 | - |
| Sudhakaran et al. | ResNet34+LSTM | - | 60.76 |
| LSTA | ResNet34+LSTM | - | 61.86 |
| MCN | I3D | 55.63 | - |
| Kapidis et al. | MFNet | 59.44 | 66.59 |
| Lu et al. | I3D | 60.54 | 68.60 |
| Ours (reported) | I3D | 62.84 | 69.58 |
| Ours (updated) | I3D | 63.09 | 69.73 |
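To make the idea of gaze-combined attention concrete, the sketch below shows one way a predicted gaze map and a learned attention map can be normalized into per-frame spatial probability maps and used to reweight the backbone features. All names, tensor shapes, and the convex-combination fusion are assumptions of this sketch, not the probabilistic formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def fuse_gaze_attention(features, attn_logits, gaze_logits, alpha=0.5):
    """Illustrative fusion of a learned attention map with a predicted gaze map.

    features:    (B, C, T, H, W) spatiotemporal features from the backbone (e.g., I3D)
    attn_logits: (B, 1, T, H, W) logits of the learned top-down attention
    gaze_logits: (B, 1, T, H, W) logits of the predicted gaze distribution
    alpha:       mixing weight between the two cues (an assumption of this sketch)
    """
    B, _, T, H, W = features.shape

    # Normalize each cue into a per-frame spatial probability map.
    attn = F.softmax(attn_logits.reshape(B, 1, T, -1), dim=-1).reshape(B, 1, T, H, W)
    gaze = F.softmax(gaze_logits.reshape(B, 1, T, -1), dim=-1).reshape(B, 1, T, H, W)

    # Combine the cues and reweight the features (residual form keeps the original signal).
    combined = alpha * gaze + (1.0 - alpha) * attn
    return features * (1.0 + combined)
```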
## Direct Optimization through argmax
Direct optimization (NeurIPS 2019) was originally proposed for learning a variational auto-encoder (VAE) with discrete latent variables. Unlike the Gumbel-Softmax reparameterization technique, direct optimization provides an unbiased gradient estimator for the discrete VAE that remains applicable even in high-dimensional structured latent spaces. We demonstrate that our training objective can be optimized effectively with this method.
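As a concrete illustration, here is a minimal sketch of the perturbed-argmax gradient estimator for a single categorical latent variable, assuming the latent space is small enough that the downstream objective f(z) can be evaluated for every category. The function name `direct_grad_estimate`, the single-sample estimate, and the fixed step `eps` are choices of this sketch, not of the released code.

```python
import torch

def direct_grad_estimate(logits, f_values, eps=1.0):
    """One-sample estimate of d/d(logits) E_z[f(z)], where z = argmax(logits + Gumbel noise).

    logits:   (K,) unnormalized log-probabilities of the categorical latent variable
    f_values: (K,) downstream objective evaluated for each possible category
    eps:      finite-difference step (smaller reduces bias, increases variance)
    """
    # Perturb-and-MAP: adding i.i.d. Gumbel(0, 1) noise and taking the argmax
    # draws an exact sample from the categorical distribution defined by `logits`.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits).clamp_min(1e-10)))
    z = torch.argmax(logits + gumbel)

    # Second argmax over the objective-perturbed scores.
    z_eps = torch.argmax(logits + gumbel + eps * f_values)

    one_hot = torch.nn.functional.one_hot(z, logits.numel()).float()
    one_hot_eps = torch.nn.functional.one_hot(z_eps, logits.numel()).float()

    # Finite-difference direction: as eps -> 0 and averaged over the Gumbel noise,
    # this converges to the gradient of E_z[f(z)] with respect to the logits.
    grad = (one_hot_eps - one_hot) / eps
    return z, grad
```

In practice the estimate would be averaged over many Gumbel draws, and to descend a loss one can pass the negated per-category loss as `f_values` (or negate the returned direction).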
## Visualization
We use Grad-CAM++ to visualize the spatiotemporal responses of the last convolutional layer and see how gaze integration affects the top-down attention of the two networks. Our model is better at attending to activity-related objects and regions; in particular, it is more sensitive to the target objects.
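For reference, the sketch below shows a plain Grad-CAM-style extraction of these responses using a forward hook on the last convolutional layer. The actual visualizations use Grad-CAM++, which differs in how the channel weights are computed, and the argument names (`model`, `conv_layer`) are placeholders rather than identifiers from this repository.

```python
import torch

def grad_cam_map(model, conv_layer, clip, class_idx):
    """Simplified Grad-CAM over a 3D conv layer of a video model.

    model:      a video model returning class logits of shape (1, num_classes)
    conv_layer: the last convolutional module of the backbone (placeholder)
    clip:       input tensor of shape (1, C, T, H, W)
    class_idx:  index of the class whose response is visualized
    """
    acts, grads = {}, {}

    def fwd_hook(module, inputs, output):
        acts["v"] = output
        # Capture d(logit)/d(activations) when backward() is called.
        output.register_hook(lambda g: grads.update(v=g))

    handle = conv_layer.register_forward_hook(fwd_hook)
    logits = model(clip)                     # (1, num_classes)
    model.zero_grad()
    logits[0, class_idx].backward()
    handle.remove()

    # Channel weights = globally averaged gradients (plain Grad-CAM weighting).
    weights = grads["v"].mean(dim=(2, 3, 4), keepdim=True)   # (1, C, 1, 1, 1)
    cam = torch.relu((weights * acts["v"]).sum(dim=1))       # (1, T', H', W')
    return cam / (cam.max() + 1e-8)                          # normalize to [0, 1]
```

The resulting map can be upsampled to the input resolution and overlaid on the video frames.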
## Code Usage
First, clone this repository and prepare the EGTEA dataset.
Then, download these four weight files: `i3d_both_base.pt`, `i3d_iga_best1_base.pt`, `i3d_iga_best1_gaze.pt`, and `i3d_iga_best1_attn.pt`.
Finally, put these files in the `weights` folder and run:
```
$ python main.py --mode test
```
This will reproduce the results reported in the paper. You can also train the model by running:
```
$ python main.py --mode train --ngpu 4 --weight weights/i3d_both_base.pt
```
## Notes
- We performed all the experiments with Python 3.6 and PyTorch 1.6.0 on 4 GPUs (TITAN Xp).
## Citation
```
@inproceedings{min2020integrating,
  title={Integrating Human Gaze into Attention for Egocentric Activity Recognition},
  author={Min, Kyle and Corso, Jason J},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={1069--1078},
  year={2021}
}
```