Intermediate CNN Features

This repository contains the implementation of the feature extraction process described in Near-Duplicate Video Retrieval by Aggregating Intermediate CNN Layers. Given an input video, one frame per second is sampled and its visual descriptor is extracted from the activations of the intermediate convolutional layers of a pre-trained Convolutional Neural Network. Then, the Maximum Activation of Convolutions (MAC) function is applied to the activations of each layer to generate a compact layer vector. Finally, the layer vectors are concatenated to generate a single frame descriptor. The feature extraction process is depicted in the following figure.

<img src="https://raw.githubusercontent.com/MKLab-ITI/intermediate-cnn-features/develop/feature_extraction.png" width="60%">
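The per-frame aggregation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the repository's implementation: the layer shapes below are hypothetical, and real activations would come from a forward pass through the chosen network.

```python
import numpy as np

def mac(activations):
    """Maximum Activation of Convolutions (MAC): global max-pooling over
    the spatial dimensions of a conv-layer activation map.

    activations: array of shape (H, W, C) -> compact layer vector of shape (C,)
    """
    return activations.max(axis=(0, 1))

# Hypothetical activation maps from three intermediate layers of one frame
# (random values stand in for a real forward pass)
layer_maps = [
    np.random.rand(28, 28, 256),
    np.random.rand(14, 14, 512),
    np.random.rand(7, 7, 512),
]

# One compact vector per layer, concatenated into a single frame descriptor
frame_descriptor = np.concatenate([mac(a) for a in layer_maps])
print(frame_descriptor.shape)  # (1280,)
```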

Prerequisites

Python, plus either Caffe or TensorFlow depending on the --framework option used below; the remaining Python dependencies are installed from requirements.txt.

Getting started

Installation

git clone https://github.com/MKLab-ITI/intermediate-cnn-features
cd intermediate-cnn-features
pip install -r requirements.txt

Feature Extraction

python feature_extraction.py --video_list <video_file> --network googlenet --framework caffe --output_path test/ --prototxt bvlc_googlenet/deploy.prototxt --caffemodel bvlc_googlenet/bvlc_googlenet.caffemodel
python feature_extraction.py --image_list <image_file> --network vgg --framework tensorflow --output_path test/ --tf_model slim/vgg_16.ckpt
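Once frame descriptors have been extracted, they can be compared for near-duplicate retrieval. The sketch below illustrates one common choice, cosine similarity between L2-normalized descriptors; it is an illustration under that assumption, not the paper's exact retrieval pipeline, and the descriptors here are random stand-ins for real extracted features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical frame descriptors standing in for extracted features
d1 = rng.random(1280)
d2 = rng.random(1280)

def cosine(a, b):
    """Cosine similarity between two descriptors: the dot product of
    their L2-normalized forms."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine(d1, d2)
print(round(sim, 3))
```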

Citation

If you use this code for your research, please cite our paper.

@inproceedings{kordopatis2017near,
  title={Near-Duplicate Video Retrieval by Aggregating Intermediate CNN Layers},
  author={Kordopatis-Zilos, Giorgos and Papadopoulos, Symeon and Patras, Ioannis and Kompatsiaris, Yiannis},
  booktitle={International Conference on Multimedia Modeling},
  year={2017}
}

Related Projects

ViSiL NDVR-DML FIVR-200K

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

Contact for further details about the project

Giorgos Kordopatis-Zilos (georgekordopatis@iti.gr) <br> Symeon Papadopoulos (papadop@iti.gr)