# Caffe-ExcitationBP-RNNs
This is a Caffe implementation of Excitation Backprop for RNNs, described in the CVPR 2018 paper cited under Reference below.

This software is provided for academic research and non-commercial purposes only, and comes without warranty.
## Prerequisites
- The same prerequisites as Caffe
- Anaconda (python packages)
## Quick Start
- Unzip the files to a local folder (denoted as root_folder).
- Enter the root_folder and compile the code the same way as in Caffe.
- Our code is tested in GPU mode, so make sure to enable GPU support when compiling.
- Make sure to also compile pycaffe, the Python interface.
- Enter root_folder/excitationBP-RNNs and run demo.ipynb in Jupyter Notebook. It shows how to compute the spatiotemporal saliency maps of a video and includes the examples from the demo video. For details on running the notebook remotely on a server, see here.
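For intuition about what the demo computes: Excitation Backprop passes a probability signal backward through positive (excitatory) weights only, so the resulting saliency map is itself a probability distribution over the input. Below is a minimal NumPy sketch of this backward rule for a single fully connected layer. It is an illustration of the general technique, not code from this repository; the activations and weights are made-up placeholders.

```python
import numpy as np

def eb_backward(a, W, p_top):
    """One Excitation Backprop step through a fully connected layer.

    a:     (n_in,)  non-negative activations of the lower layer
    W:     (n_out, n_in) weights; only positive weights excite
    p_top: (n_out,) winner probabilities at the upper layer (sums to 1)
    Returns p_bot: (n_in,) winner probabilities at the lower layer.
    """
    Wp = np.maximum(W, 0.0)        # keep excitatory connections only
    Z = Wp @ a                     # per-output-neuron normalization
    Z[Z == 0] = 1.0                # guard against division by zero
    # Lower neuron j receives a[j] * sum_i Wp[i, j] * p_top[i] / Z[i]
    p_bot = a * ((Wp / Z[:, None]).T @ p_top)
    return p_bot

# Placeholder values (every output neuron has some excitatory input,
# so the probability mass is conserved exactly).
a = np.array([0.2, 0.5, 0.1, 0.9, 0.3])
W = np.array([[ 0.5, -0.2,  0.1,  0.0,  0.3],
              [-0.4,  0.8, -0.1,  0.2,  0.0],
              [ 0.1,  0.0,  0.6, -0.3,  0.2]])
p_top = np.array([0.7, 0.2, 0.1])

p_bot = eb_backward(a, W, p_top)
```

Chaining this step through every layer of the CNN-LSTM down to the input yields the spatiotemporal saliency maps shown in the demo.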
## Other comments
- We implemented both GPU and CPU versions of Excitation Backprop for RNNs. Change `caffe.set_mode_eb_gpu()` to `caffe.set_mode_eb_cpu()` to run the CPU version.
- You can download a pre-trained action recognition model at this link. The model must be placed in the folder root_folder/models/VGG16_LSTM/.
- To apply your own CNN-LSTM model, modify root_folder/models/VGG16_LSTM/deploy.prototxt and add a dummy loss layer at the end of the file.
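As a rough illustration, a dummy loss layer appended to the end of a deploy.prototxt typically looks like the fragment below. The layer type and blob names here are placeholders; match `bottom` to the final score blob of your own network, as done in the provided VGG16_LSTM deploy.prototxt.

```
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8-final"   # placeholder: your network's final score blob
  bottom: "label"       # placeholder: a label input blob
  top: "loss"
}
```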
## Reference
```
@InProceedings{Bargal_2018_CVPR,
  author    = {Adel Bargal, Sarah and Zunino, Andrea and Kim, Donghyun and Zhang, Jianming and Murino, Vittorio and Sclaroff, Stan},
  title     = {Excitation Backprop for RNNs},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2018}
}
```