TextTopicNet

Self-supervised learning of visual features through embedding images into text topic spaces, CVPR 2017.

L. Gomez*, Y. Patel*, M. Rusiñol, D. Karatzas, C.V. Jawahar.

In this paper we present a method that takes advantage of freely available multi-modal content to train computer vision algorithms without human supervision. We put forward the idea of performing self-supervised learning of visual features by mining a large-scale corpus of multi-modal (text and image) documents. We show that discriminative visual features can be learnt efficiently by training a CNN to predict the semantic context in which a particular image is most likely to appear as an illustration. For this we leverage the hidden semantic structures discovered over the text corpus by a well-known topic modeling technique, Latent Dirichlet Allocation (LDA).

TextTopicNet diagram

Our CNN learns to predict the semantic context in which images appear as illustrations. Given an illustrated article, we project its textual information into the topic-probability space provided by the LDA topic modeling framework. We then use this semantic-level representation as the supervisory signal for CNN training.
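To make the supervisory signal concrete: the training target for an image is the topic-probability vector of the article it illustrates, and the CNN is trained to predict that distribution. A minimal pure-Python sketch with toy numbers (not the actual LDA output, and a generic cross-entropy objective rather than the exact loss layer used in the repository's prototxt files):

```python
import math

def cross_entropy(target, predicted, eps=1e-12):
    """Cross-entropy between the article's LDA topic distribution (target)
    and the CNN's predicted topic distribution for the illustrating image."""
    return -sum(t * math.log(p + eps) for t, p in zip(target, predicted))

# Toy 4-topic example (the paper uses a 40-topic space).
article_topics = [0.70, 0.20, 0.05, 0.05]   # LDA projection of the article text
cnn_prediction = [0.60, 0.25, 0.10, 0.05]   # CNN output for the image

loss = cross_entropy(article_topics, cnn_prediction)
```

Training minimizes this discrepancy over all (image, article) pairs, so no human-provided labels are needed.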

If you make use of the code in this repository for scientific purposes, please cite our paper:

@inproceedings{gomez2017self,
  title={Self-supervised learning of visual features through embedding images into text topic spaces},
  author={Gomez, Lluis and Patel, Yash and Rusinol, Mar{\c{c}}al and Karatzas, Dimosthenis and Jawahar, CV},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={4230--4239},
  year={2017}
}

Pretrained models

TextTopicNet_Wikipedia_ImageCLEF_40Topics.caffemodel

TextTopicNet_finetuned_VOC2007.caffemodel

Training from scratch

Set up data

Download and uncompress the Wikipedia Retrieval 2010 Collection and The PASCAL Visual Object Classes Challenge 2007 datasets into the data/ImageCLEF_Wikipedia/ and data/VOC2007/ folders respectively. Follow the specific instructions provided in:

data/ImageCLEF_Wikipedia/README.md

data/VOC2007/README.md

Learn LDA model

Run the following commands in order to process the data and learn the LDA model on the text corpus:

cd LDA
python generate_train_dict.py
python learn_LDA_model.py
python generate_train_labels.py

This will generate the LDA model files (ldamodel40.lda and ldamodel40.lda.state), which can be used to project new texts into the learned topic space, and a file training_labels40.json with the training labels (topic-space probabilities) for all images in our training dataset. If you want to skip this step and directly train the CNN model, you can download the files from the following links: ldamodel40.lda / ldamodel40.lda.state / dictionary.dict / training_labels40.json
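For intuition, projecting a text into the topic space yields a probability vector over the topics, and that vector is what gets stored as a training label. The repository does this with the gensim LDA model files above; the toy pure-Python sketch below (a made-up 3-topic, 4-word model, not the learned one) only illustrates the shape of the output:

```python
from collections import Counter

# Toy topic-word weights: 3 topics over a 4-word vocabulary.
# Illustrative numbers only; the real model has 40 topics learned by LDA.
topic_word = {
    "sports":   {"ball": 3.0, "team": 2.0, "law": 0.1, "court": 1.0},
    "politics": {"ball": 0.1, "team": 0.5, "law": 3.0, "court": 2.0},
    "science":  {"ball": 0.5, "team": 0.2, "law": 0.3, "court": 0.1},
}

def project(tokens):
    """Score each topic by its words' weights, then normalize to probabilities."""
    counts = Counter(tokens)
    scores = {t: sum(w.get(word, 0.0) * n for word, n in counts.items())
              for t, w in topic_word.items()}
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}

probs = project(["law", "court", "team"])
```

The resulting probabilities sum to 1, and it is exactly such per-article vectors that training_labels40.json associates with the training images.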

Train CNNs

Run the following commands to train the CaffeNet model from scratch:

cd CNN/CaffeNet
export PYTHONPATH=./:$PYTHONPATH
caffe train --solver solver.prototxt
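The solver file referenced above configures the optimization. Its exact contents ship with the repository; for orientation only, a typical CaffeNet-style solver looks roughly like this (illustrative values and file name, not the repo's actual settings):

```
net: "train_val.prototxt"
base_lr: 0.001
lr_policy: "step"
stepsize: 100000
momentum: 0.9
weight_decay: 0.0005
max_iter: 450000
snapshot: 10000
snapshot_prefix: "texttopicnet"
solver_mode: GPU
```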

Once trained, the CNN model can be used for image classification and for multi-modal retrieval tasks.
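Since images (via the CNN) and texts (via LDA) are both mapped into the same 40-dimensional topic space, retrieval across modalities reduces to nearest-neighbor search in that space. A minimal sketch with toy 4-dimensional vectors and a hypothetical image database (cosine similarity is one reasonable choice of ranking measure):

```python
import math

def cosine(a, b):
    """Cosine similarity between two topic-space vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy topic-space embeddings (the real ones are 40-dimensional).
text_query = [0.70, 0.20, 0.05, 0.05]         # LDA projection of a text query
image_db = {
    "img_001.jpg": [0.65, 0.25, 0.05, 0.05],  # CNN topic prediction per image
    "img_002.jpg": [0.05, 0.10, 0.80, 0.05],
}

# Text-to-image retrieval: rank images by similarity to the query.
ranking = sorted(image_db, key=lambda k: cosine(text_query, image_db[k]),
                 reverse=True)
```

Image-to-text retrieval works symmetrically, with an image's predicted topic vector queried against projected texts.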

Experiments

The scripts in the experiments/ folder reproduce the results of the experiments in our paper.