
Tracking by Natural Language Specification


This repository contains the code for the following paper:

@article{li2017cvpr,
  title={Tracking by Natural Language Specification},
  author={Li, Zhenyang and Tao, Ran and Gavves, Efstratios and Snoek, Cees G. M. and Smeulders, Arnold W. M.},
  journal={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2017}
}

Download Dataset

Lingual OTB99 Sentences

Lingual ImageNet Sentences

Please note that for our OTB99 videos we use all the frames from the original OTB100 dataset, while for the ImageNet videos we may select only a subsequence (see the start/end frames selected for each video in train.txt or test.txt).

How to use the demo code

Download and set up Caffe (our own branch)

  1. Caffe branch here (note: use the langtrackV3 branch, not the master branch)
  2. Compile Caffe with the Python layer option enabled, i.e. set WITH_PYTHON_LAYER := 1 in Makefile.config
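
After building (e.g. make all pycaffe, or the equivalent for your setup), a quick way to confirm that Python picks up this branch's pycaffe rather than another Caffe installation is an import check. This is a minimal sketch and assumes the branch's python/ directory has been added to PYTHONPATH.

```python
# Minimal sanity check (assumes <caffe_root>/python is on PYTHONPATH).
import caffe

caffe.set_mode_cpu()    # or caffe.set_mode_gpu() + caffe.set_device(0) for a GPU build
print(caffe.__file__)   # should point into the langtrackV3 checkout, not another Caffe install
```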

Download pre-trained models

  1. Download the natural language segmentation model (caffemodel) and copy it to MAIN_PATH/snapshots/lang_high_res_seg/_iter_25000.caffemodel

  2. Download the tracking model (caffemodel) and copy it to MAIN_PATH/VGG16.v2.caffemodel
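
As a quick check that the downloaded weights are in place, both models can be loaded with pycaffe. This is only a sketch: the deploy prototxt names below are placeholders, and the actual network definitions are the ones referenced by the demo notebooks.

```python
import os
import caffe

MAIN_PATH = '/path/to/your/checkout'  # adjust to your MAIN_PATH

seg_weights = os.path.join(MAIN_PATH, 'snapshots/lang_high_res_seg/_iter_25000.caffemodel')
track_weights = os.path.join(MAIN_PATH, 'VGG16.v2.caffemodel')

# The prototxt filenames here are placeholders; use the network definitions
# that the demo notebooks load.
seg_net = caffe.Net('lang_seg_deploy.prototxt', seg_weights, caffe.TEST)
track_net = caffe.Net('track_deploy.prototxt', track_weights, caffe.TEST)

print(list(seg_net.blobs.keys()))   # inspect the input/output blob names
```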

Run demo code

IPython notebook code

Here we first demonstrate how Model II in the paper works with example videos (see the sketch after the list below for how the two stages fit together):

  1. Given an image and a natural language query, identify the target (applied to the first query frame of a video only): demo/lang_seg_demo.ipynb
  2. Given the visual target (the box identified in step 1) and a sequence of frames, track the object through all the frames: demo/lang_track_demo.ipynb
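
For orientation, the way the two notebooks fit together can be summarized as below. This is a hypothetical outline, not code shipped with the repository: ground_target and track_sequence stand in for the logic inside lang_seg_demo.ipynb and lang_track_demo.ipynb respectively.

```python
# Hypothetical outline of the two-stage pipeline demonstrated by the notebooks.
# ground_target() and track_sequence() are illustrative stand-ins for the notebook
# code, not functions exported by this repository.

def run_pipeline(frames, query):
    """frames: list of frame images in temporal order; query: natural language string."""
    # Stage 1 (demo/lang_seg_demo.ipynb): on the first query frame only, the language
    # segmentation network localizes the target described by the query as a box.
    target_box = ground_target(frames[0], query)      # e.g. (x, y, w, h)

    # Stage 2 (demo/lang_track_demo.ipynb): starting from that box, the tracking
    # network follows the object through the remaining frames, one box per frame.
    return track_sequence(frames, target_box)
```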