DCR

This repo contains the official implementation of the paper

Learning to Anticipate Future with Dynamic Context Removal
Xinyu Xu, Yong-Lu Li, Cewu Lu.

In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022.

[arxiv] [code] [model]


Data Preparation

We reorganize the annotation files of the four datasets [1-4] in the data folder.
You need to download the pre-extracted features into the data/feature folder.
TSN features can be downloaded from Link [5].
irCSN-152 features can be downloaded from Link [6].
We also provide a stronger TSM backbone; its features can be downloaded from Link.
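
The code expects the annotations and the downloaded features to sit side by side under data/. A possible layout is sketched below; the entries are only illustrative, so follow the paths in the config files for the exact names.

data/
  (annotation files of the four datasets, as shipped in this repo)
  feature/
    (pre-extracted TSN / irCSN-152 / TSM features you downloaded)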

Packages

We conduct experiments in the following environment:

python == 3.9

torch == 1.9
torchvision == 0.10.0
apex == 0.1.0
tensorboardX
yacs
pyyaml
numpy
prefetch_generator
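
If you need to set this environment up from scratch, one possible way (assuming pip and a CUDA-enabled PyTorch build; adapt the exact versions to your platform) is:

pip install torch==1.9.0 torchvision==0.10.0 tensorboardX yacs pyyaml numpy prefetch_generator

apex is not installed from PyPI; it is usually built from source following the instructions at https://github.com/NVIDIA/apex.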

Evaluation

We release pre-trained models here.
To test the performance of our model, for example using the RGB-TSM backbone on EPIC-KITCHENS-100 [1], you can run the following command.

python eval.py --cfg configs/EK100RGBTSM/eval.yaml --resume ./weights/EK100RGBTSM.pt

Here ./weights/EK100RGBTSM.pt is the path to the pre-trained model you downloaded.

To perform late fusion, you first need to store the predicted results of each model and then run the fusion script. For example:

python eval_and_extract.py --cfg configs/EK100RGBTSM/eval.yaml --resume ./weights/EK100RGBTSM.pt

python fuse/fuse_EK100.py
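
Late fusion combines the stored prediction scores of the individual models (for example by averaging). The following is a minimal sketch of this step, not the actual fuse/fuse_EK100.py: it assumes each model's outputs were saved as a pickled dict mapping a clip id to a vector of action scores, and the file names are hypothetical.

import pickle
import numpy as np

def load_scores(path):
    # {clip_id: np.ndarray of shape [num_actions]}
    with open(path, "rb") as f:
        return pickle.load(f)

rgb = load_scores("preds/EK100RGBTSM.pkl")    # hypothetical dump from eval_and_extract.py
flow = load_scores("preds/EK100FlowTSM.pkl")  # hypothetical second model

fused = {}
for clip_id in rgb:
    # late fusion: average the per-class scores of the individual models
    fused[clip_id] = (rgb[clip_id] + flow[clip_id]) / 2.0

# top-5 anticipated actions per clip after fusion
top5 = {cid: np.argsort(-scores)[:5] for cid, scores in fused.items()}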

The following is the expected validation set performance.

EPIC-KITCHENS-100

Method        Overall  Unseen  Tail
RULSTM        14.0     14.1    11.1
ActionBanks   14.7     14.5    11.8
TransAction   16.6     13.8    15.5
AVT           15.9     11.9    14.1
DCR           18.3     14.7    15.8
EPIC-KITCHENS-55

Method        Top-1  Top-5
ATSN          -      16.3
ED            -      25.8
MCE           -      26.1
RULSTM        15.3   35.3
FHOI          10.4   25.5
ImagineRNN    -      35.6
ActionBanks   15.1   35.6
Ego-OMG       19.2   -
AVT           16.6   37.6
DCR           19.2   41.2
EGTEA GAZE+

We continue to work on EGTEA GAZE+ to improve our models.

Method         Top-5  Recall@5
DMR            55.7   38.1
ATSN           40.5   31.6
NCE            56.3   43.8
TCN            58.5   47.1
ED             60.2   54.6
RL             62.7   52.2
EL             63.8   55.1
RULSTM         66.4   58.6
DCR (Updated)  67.9   61.3

The EPIC-KITCHENS test set files are available here.

More results can be found in Model Zoo.

Training

Taking the same setting as an example, you can reproduce the training process by running:

python train_order.py --cfg configs/EK100RGBTSM/order.yaml --name order

python train.py --cfg configs/EK100RGBTSM/train.yaml --name train --resume exp/EK100RGBTSM/order/epoch_50.pt

The first command runs our frame-order pre-training stage; the resulting model is stored at exp/EK100RGBTSM/order/epoch_50.pt. The second command reloads this pre-trained model and runs the anticipation training stage.

It is also possible to run only the anticipation training from scratch:

python train.py --cfg configs/EK100RGBTSM/train.yaml --name train 

Citation

If you find our paper or code helpful, please cite our paper.

@inproceedings{xu2022learning,
  title={Learning to Anticipate Future with Dynamic Context Removal},
  author={Xu, Xinyu and Li, Yong-Lu and Lu, Cewu},
  booktitle={CVPR},
  year={2022}
}

Reference

[1] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Antonino Furnari, Jian Ma, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. Rescaling egocentric vision: Collection, pipeline and challenges for EPIC-KITCHENS-100. International Journal of Computer Vision (IJCV), 2021.

[2] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, et al. Scaling egocentric vision: The EPIC-KITCHENS dataset. In Proceedings of the European Conference on Computer Vision (ECCV), pages 720–736, 2018.

[3] Yin Li, Miao Liu, and James M. Rehg. In the eye of beholder: Joint learning of gaze and actions in first person video. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.

[4] Sebastian Stein and Stephen J McKenna. Combining embedded accelerometers with computer vision for recognizing food preparation activities. In Proceedings of the 2013 ACM international joint conference on Pervasive and ubiquitous computing, pages 729–738, 2013.

[5] Antonino Furnari and Giovanni Maria Farinella. Rolling-unrolling LSTMs for action anticipation from first-person video. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020.

[6] Rohit Girdhar and Kristen Grauman. Anticipative Video Transformer. In ICCV, 2021.