Phrase Localization and Visual Relationship Detection with Comprehensive Image-Language Cues
pl-clc contains the implementation of our paper, with several improvements over the initial arXiv submission. If you find this code useful in your research, please consider citing:
@inproceedings{plummerPLCLC2017,
Author = {Bryan A. Plummer and Arun Mallya and Christopher M. Cervantes and Julia Hockenmaier and Svetlana Lazebnik},
Title = {Phrase Localization and Visual Relationship Detection with Comprehensive Image-Language Cues},
booktitle = {ICCV},
Year = {2017}
}
Phrase Localization Evaluation Demo
This code was tested using Matlab R2016a on a system running Ubuntu 14.04.
- Clone the pl-clc repository:

git clone --recursive https://github.com/BryanPlummer/pl-clc.git
- Follow the installation requirements for the external code, which includes Faster RCNN and LIBSVM. On the system this code was tested on, only Caffe (in Faster RCNN) and LIBSVM required compilation to use the evaluation script.
- Optional: download the Stanford Parser and put it in the external folder, naming it stanford-parser. Note that the version of the Stanford Parser used for the precomputed data was 3.4.1.
- Download the precomputed data (8.3G): pl-clc models
- Get the Flickr30k Entities dataset and put it in the datasets folder. The code also assumes the images have been placed in datasets/Flickr30kEntities/Images.
- After unpacking the precomputed data, you can run our evaluation code:
>> evalAllCuesFlickr30K
This step took about 45 minutes using a single Tesla K40 GPU on a system with an Intel(R) Xeon(R) CPU E5-2687W v2 processor.
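If you want to sanity-check the setup before kicking off the full run, the minimal sketch below can be executed from the repository root. The checks and messages are illustrative assumptions based on the default layout described above, not code that ships with the repository.

```matlab
% Illustrative pre-flight check (not part of pl-clc): verify the directory
% layout the evaluation script expects before running it.
assert(exist(fullfile('datasets', 'Flickr30kEntities', 'Images'), 'dir') == 7, ...
    'Expected Flickr30k images in datasets/Flickr30kEntities/Images');
if exist(fullfile('external', 'stanford-parser'), 'dir') ~= 7
    warning('external/stanford-parser not found (optional component)');
end

% Full-pipeline phrase localization evaluation on Flickr30k Entities.
evalAllCuesFlickr30K
```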
Training new models
The trainScripts folder contains the example scripts that were used to create all the precomputed data. Training these models from scratch requires about 100GB of memory. This can be reduced by simply removing some parfor loops (as sketched below), but training the CCA model requires about 70GB of memory by itself.
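As a rough illustration of that trade-off, a loop along the lines of the hypothetical sketch below can be switched from parfor to a plain for loop. The function and variable names here are invented for illustration and do not correspond to actual code in trainScripts.

```matlab
% Hypothetical training loop: parfor runs iterations on parallel workers,
% and each worker holds its own copy of large broadcast variables, which
% multiplies peak memory use.
parfor i = 1:numel(imageList)
    cueFeatures{i} = computeCueFeatures(imageList{i});
end

% Replacing parfor with a plain for loop trades training time for memory,
% since only one iteration's data is live at a time.
for i = 1:numel(imageList)
    cueFeatures{i} = computeCueFeatures(imageList{i});
end
```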