# MDPM: Mid-level Deep Pattern Mining
## Introduction
This repository contains the source code of the algorithm described in the CVPR 2015 paper Mid-level Deep Pattern Mining and the technical report Mining Mid-level Visual Patterns with Deep CNN Activations. More details are provided on the project page. This package has been tested with Matlab 2014a on a 64-bit Linux machine. This code is for research purposes only.
## Citing MDPM
If you find MDPM useful in your research, please consider citing:
```
@inproceedings{LiLSH15CVPR,
  author    = {Yao Li and Lingqiao Liu and Chunhua Shen and Anton van den Hengel},
  title     = {Mid-level Deep Pattern Mining},
  booktitle = {CVPR},
  pages     = {971--980},
  year      = {2015},
}
```
## Installing MDPM
### Prerequisites

- Caffe: install Caffe by following its installation instructions. Do not forget to run `make matcaffe` to compile Caffe's Matlab interface. You also need to download the ImageNet mean file (run `get_ilsvrc_aux.sh` from `data/ilsvrc12`). Note: as we only use the Caffe CNN as a feature extractor, installing Caffe in CPU-only mode is fine.
- CNN models: we consider two CNN models in the experiments. The first is the BVLC Reference CaffeNet (CaffeRef for short), which can be downloaded by running `download_model_binary.py models/bvlc_reference_caffenet` from `scripts`. The second is the VGG 19-layer Very Deep model (VGGVD for short), which can be downloaded from here.
- Apriori algorithm: we use this implementation. Click the link to download the package, uncompress it, and run `make` in `apriori/apriori/src` to compile it. Detailed usage of this package can be found here.
- Liblinear: download Liblinear and compile it by following its instructions.
- KSVDS-Box v11: as we use the `im2colstep` function from this toolbox, you need to download and compile it (`im2colstep` is found in `ksvdsbox11/private`); a quick sanity check for the compiled mex file is sketched after this list.
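Once the mex file is compiled, a quick toy call can confirm it works. This is only an illustrative sketch: the image and patch sizes below are made up and are not MDPM's actual settings.

```matlab
% Toy sanity check for the compiled im2colstep mex file.
% im2colstep(A, [n1 n2], [s1 s2]) extracts n1-by-n2 blocks from A with
% step sizes s1, s2; each block becomes one column of the output matrix.
A = rand(8, 8);                         % toy "image"
patches = im2colstep(A, [3 3], [1 1]);  % all 3x3 patches with stride 1
disp(size(patches));                    % 9 x 36: 9 pixels per patch, 36 patches
```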
### Configuring MDPM
- Download MDPM: `git clone https://github.com/yaoliUoA/MDPM`.
- Download the MIT Indoor dataset from here.
- Open `init.m` in Matlab and change the values of several variables, including `conf.pathToLiblinear`, `conf.pathToCaffe`, `conf.dataset` and `conf.imgDir`, to match your local configuration (see the sketch after this list).
- Copy the `apriori` executable from the `apriori/apriori/src` directory into the `mining` directory.
- Copy the `im2colstep` mex file into the `cnn` directory.
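For reference, an edited `init.m` might contain assignments along the following lines. All paths here are placeholders, and the `'MIT67'` string is only a guess based on the `data/MIT67` directory used below; check `init.m` itself for the exact expected values.

```matlab
% Illustrative configuration (placeholder paths; adapt to your machine).
conf.pathToCaffe     = '/home/user/caffe';       % Caffe root, with matcaffe compiled
conf.pathToLiblinear = '/home/user/liblinear';   % compiled Liblinear directory
conf.dataset         = 'MIT67';                  % dataset name (assumed from data/MIT67)
conf.imgDir          = '/home/user/MITIndoor';   % root directory of the dataset images
```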
### Running MDPM
- Run `demo.m`. It should work correctly for the MIT Indoor dataset if you have followed the instructions above (a quick pre-flight check is sketched after this list). Note that we have not released a demo for the PASCAL VOC datasets, as the dataset setup for VOC is different.
- Important: it may take some time to obtain the final classification result, so we suggest running MDPM on a cluster where jobs can be run in parallel. The `*.sh` scripts are provided for submitting jobs on a cluster.
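Before launching a long run, a small pre-flight check can confirm the copied binaries are where the code expects them. This is a sketch that assumes the file locations from the configuration steps above and that `demo.m` reads the settings made in `init.m`.

```matlab
% Run from the repository root in Matlab.
assert(exist(fullfile('mining', 'apriori'), 'file') > 0, ...
    'apriori binary not found in mining/');
assert(exist(fullfile('cnn', ['im2colstep.' mexext]), 'file') > 0, ...
    'im2colstep mex file not found in cnn/');
demo;  % start the MDPM pipeline on the MIT Indoor dataset
```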
## Pre-computed image features
We provide the final image features generated by the proposed MDPM algorithm using different CNN models (CaffeRef or VGGVD). You should be able to reproduce the results presented in the CVPR 2015 paper and the technical report.
- MIT Indoor dataset: feature_MITIndoor_CaffeRef and feature_MITIndoor_VGGVD. After uncompressing the downloaded file, copy the `.mat` files to the `data/MIT67/feaFinal_128_32_150` directory (create it yourself); you should then be able to run `classify.m` under `classify` to reproduce the classification accuracy presented in the technical report.
- PASCAL VOC 2007 dataset: feature_VOC2007_CaffeRef and feature_VOC2007_VGGVD. After uncompressing the downloaded file, copy the `.mat` files to the `data/VOC2007/feaFinal_128_32_150` directory (create it yourself); you should then be able to run `train_VOC.m` and then `test_VOC.m` under `classify` to reproduce the mean average precision presented in the technical report.
- PASCAL VOC 2012 dataset: feature_VOC2012_VGGVD. After uncompressing the downloaded file, copy the `.mat` files to the `data/VOC2012/feaFinal_128_32_150` directory (create it yourself), then run `train_VOC.m` followed by `test_VOC_txt.m` under `classify`. The generated `.txt` files can be submitted to the evaluation server.
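To verify that the feature files landed in the right place before running the classification scripts, a short check such as the following may help. It is shown for MIT Indoor, only lists files, and makes no assumption about the variables stored inside the `.mat` files.

```matlab
% List the pre-computed feature files and peek at one of them.
feaDir = fullfile('data', 'MIT67', 'feaFinal_128_32_150');
files  = dir(fullfile(feaDir, '*.mat'));
fprintf('found %d .mat feature files in %s\n', numel(files), feaDir);
if ~isempty(files)
    % show the variables stored in the first feature file
    whos('-file', fullfile(feaDir, files(1).name));
end
```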
## Feedback
If you have any questions or feedback, or find bugs in the code, please contact yao.li01@adelaide.edu.au.