Home

Awesome

Is Second-order Information Helpful for Large-scale Visual Recognition?

Created by Jiangtao Xie and Peihua Li

<div> &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;<img src="doc/figures/MPN-COV.jpg" width="80%"/> </div>

Contents

  1. Introduction
  2. Classification results
  3. Implementation details
  4. Installation
  5. Usage
  6. Change log
  7. Other Implementations
  8. References
  9. Contact

Introduction

This repository contains the source code and models trained on the ImageNet 2012 dataset for the following paper:

@inproceedings{Li2017,
    author    = {Peihua Li and Jiangtao Xie and Qilong Wang and Wangmeng Zuo},
    title     = {Is Second-order Information Helpful for Large-scale Visual Recognition?},
    booktitle = {International Conference on Computer Vision (ICCV)},
    year      = {2017}
}

We proposed second-order pooling to replace the common first-order max/average pooling after the last convolutional layer. The proposed networks, called MPN-COV ConvNets, achieved consistent, nontrivial improvements over their first-order counterparts. The key to our method is Matrix Power Normalization of COVariance, which

  1. amounts to robust covariance estimation given a small number of high-dimensional features (a.k.a. the small sample/large dimension problem), as commonly seen in the last convolutional layers of state-of-the-art ConvNets;

  2. appropriately exploits Riemannian geometry while allowing zero eigenvalues, overcoming the downside of the well-known Log-Euclidean metric in this scenario.

    Figure 1: Error (%, 10-crop) comparison of MPN-COV ConvNets with their counterparts. Our method reduces top-1 error by 1.6%~6.8% and top-5 error by 1.0%~4.0%.

You can visit our project page for more details.
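For readers who want the gist of MPN-COV without reading the MATLAB/C++ sources, the forward computation can be sketched in NumPy. This is a simplified illustration under the paper's description (center the features, form the sample covariance, raise it to the power alpha via eigendecomposition), not the repository's implementation:

```python
import numpy as np

def mpn_cov(X, alpha=0.5):
    """Matrix Power Normalized COVariance pooling (illustrative sketch).

    X: (d, N) matrix of d-dimensional features from N spatial positions.
    Returns the (d, d) matrix C^alpha, where C is the sample covariance.
    """
    d, N = X.shape
    J = np.eye(N) / N - np.ones((N, N)) / N**2   # scaled centering matrix
    C = X @ J @ X.T                              # sample covariance, (d, d)
    lam, U = np.linalg.eigh(C)                   # C = U diag(lam) U^T
    lam = np.clip(lam, 0.0, None)                # guard tiny negative eigenvalues
    return U @ np.diag(lam ** alpha) @ U.T       # U diag(lam^alpha) U^T
```

With alpha = 0.5 the output is the matrix square root of the covariance, so multiplying it by itself recovers the covariance, which is a convenient correctness check.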

Classification results

Classification results (top-1/top-5 error rates, %) on the ImageNet 2012 validation set

| Network | 224x224<br />1-crop | 224x224<br />10-crop | GoogleDrive | BaiduCloud |
| :--- | :---: | :---: | :---: | :---: |
| MPN-COV-ResNet-50  | 22.27/6.35  | 21.16/5.58  | 186.8MB | 186.8MB |
| MPN-COV-ResNet-101 | 21.17/5.70  | 19.71/5.01  | 270.7MB | 270.7MB |
| MPN-COV-AlexNet    | 38.37/17.14 | 34.97/14.60 | 567.0MB | 567.0MB |
| MPN-COV-VGG-M      | 34.63/14.64 | 31.81/12.52 | 581.6MB | 581.6MB |
| MPN-COV-VGG-16     | 26.55/8.94  | 24.68/7.75  | 614.0MB | 614.0MB |

Implementation details

We developed our programs based on MatConvNet and MATLAB 2015b, running under either Ubuntu 14.04.5 LTS or Windows 7. To implement the MPN-COV layer, we adopt the eigenvalue decomposition algorithm on the CPU in single-precision format, as its GPU version on the CUDA platform is much slower. Except for the eigenvalue decomposition, all other operations in the forward and backward propagations are performed in C++ on the GPU; the backward propagation through the structured (eigendecomposition-based) layer follows the matrix backpropagation methodology [1]. While writing the code, we followed the conventions of MatConvNet as closely as possible.
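The backward pass through the eigendecomposition is the subtle part. As a hedged NumPy sketch (not the repository's C++/CUDA code), the gradient of Y = C^alpha with respect to C can be written via the Daleckii-Krein formula for spectral functions, in the spirit of matrix backpropagation [1]; the sketch assumes C is strictly positive definite and the incoming gradient is symmetric:

```python
import numpy as np

def mpn_cov_backward(C, dL_dY, alpha=0.5):
    """Gradient of a scalar loss L w.r.t. a strictly positive definite C,
    where Y = C^alpha is computed through the eigendecomposition.
    dL_dY is assumed symmetric (Y itself is symmetric)."""
    lam, U = np.linalg.eigh(C)               # C = U diag(lam) U^T
    g  = lam ** alpha                        # g(lam)  = lam^alpha
    gp = alpha * lam ** (alpha - 1.0)        # g'(lam) = alpha * lam^(alpha-1)
    diff = lam[:, None] - lam[None, :]
    # Gamma_ij = (g_i - g_j) / (lam_i - lam_j) for distinct eigenvalues,
    # and g'(lam_i) on (near-)degenerate pairs, including the diagonal.
    near = np.abs(diff) < 1e-12
    Gamma = np.where(near, gp[:, None],
                     (g[:, None] - g[None, :]) / np.where(near, 1.0, diff))
    S = U.T @ dL_dY @ U
    return U @ (Gamma * S) @ U.T             # dL/dC
```

A finite-difference check against a directional derivative is a quick way to validate such a formula before porting it to GPU code.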

Created and Modified

  1. Files we created to implement the MPN-COV layer
└── matconvnet_root_dir
    └── matlab
        ├── src
        │   ├── bits
        │   │   ├── impl
        │   │   │   ├── blashelper_cpu.hpp
        │   │   │   ├── blashelper_gpu.hpp
        │   │   │   ├── mpn_cov_cpu.cpp
        │   │   │   ├── mpn_cov_gpu.cu
        │   │   │   └── nnmpn_cov_blas.hpp
        │   │   ├── nnmpn_cov.cpp
        │   │   ├── nnmpn_cov.cu
        │   │   └── nnmpn_cov.hpp
        │   ├── vl_nnmpn_cov.cpp
        │   └── vl_nnmpn_cov.cu
        ├── +dagnn
        │   └── MPN_COV_Pool_C.m
        └── EIG.m
  2. Files we modified to support the MPN-COV layer
└── matconvnet_root_dir
    └── matlab
        ├── vl_compilenn.m
        └── simplenn
            └── vl_simplenn.m

Installation

  1. We package our programs and demos in the MatConvNet toolkit. You can download this PACKAGE directly, or clone it in your terminal:
   >> git clone https://github.com/jiangtaoxie/MPN-COV

  2. Then follow the tutorial in MatConvNet's installation guide to compile, for example:
   >> vl_compilenn('enableGpu', true, ...
                   'cudaRoot', '/Developer/NVIDIA/CUDA-8.0', ...
                   'cudaMethod', 'nvcc', ...
                   'enableCudnn', true, ...
                   'cudnnRoot', 'local/cudnn-rc4') ;

  3. Currently, we use MatConvNet 1.0-beta22. For newer versions, please consult the MatConvNet website.

Usage

Insert MPN-COV layer into your network

  1. Under the SimpleNN framework
   net.layers{end+1} = struct('type','mpn_cov',...
                              'name','mpn_cov_pool',...
                              'method',[],...
                              'regu_method','power',...
                              'alpha', 0.5,...
                              'epsilon', 0);
  2. Under the DagNN framework
   name = 'mpn_cov_pool';
   net.addLayer(name , ...
                dagnn.MPN_COV_Pool_C('method', [],...
                                    'regu_method', 'power', ...  
                                    'alpha', 0.5,...
                                    'epsilon', 0), ...
                                    lastAdded.var, ...
                                    {name, [name, '_aux_S'], [name, '_aux_V'],[name,'_aux_D']});
   lastAdded.var = name;

In our demo code, we implement MPN-COV AlexNet, VGG-M and VGG-VD under the SimpleNN framework, and MPN-COV ResNet under the DagNN framework.

Argument descriptions

  1. 'method': It is reserved for future use.
  2. 'regu_method': We introduced three normalization methods in the paper, namely MPN-COV, MPN-COV+matrix-l2, and MPN-COV+matrix-Fro. As the latter two produced unsatisfactory performance, we only support MPN-COV, designated by 'power'.
  3. 'alpha': It denotes the exponent of the matrix power function (equivalently, the power applied to the eigenvalues; see the paper). Its value should be positive; the default of 0.5 produces the best performance.
  4. 'epsilon': It is a small positive number added to the eigenvalues of the covariance matrix. It is set to 0 since the Power-Euclidean metric allows eigenvalues to be zero.
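As a quick sanity check of the 'epsilon' remark above, the following NumPy snippet (an illustration with made-up shapes, not repository code) builds a rank-deficient sample covariance; the matrix power stays finite at zero eigenvalues, whereas the matrix logarithm used by the Log-Euclidean metric diverges:

```python
import numpy as np

# Hypothetical small-sample setting: d = 6 feature dimensions but only
# N = 3 samples, so the sample covariance has rank at most N - 1 = 2.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 3))
Xc = X - X.mean(axis=1, keepdims=True)     # center the features
C = (Xc @ Xc.T) / 3                        # rank-deficient covariance
lam, U = np.linalg.eigh(C)
lam = np.where(lam < 1e-10, 0.0, lam)      # zero out numerically-null eigenvalues

power = lam ** 0.5                         # Power-Euclidean: finite at lam = 0
with np.errstate(divide='ignore'):
    logs = np.log(lam)                     # Log-Euclidean: -inf at lam = 0
print(np.all(np.isfinite(power)), np.any(np.isinf(logs)))   # True True
```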

Change log

Other Implementations

  1. Caffe implementation (coming soon)
  2. TensorFlow implementation (coming soon)

References

[1] C. Ionescu, O. Vantzos, and C. Sminchisescu. Matrix backpropagation for deep networks with structured layers. In ICCV, 2015.

Contact

If you have any questions or suggestions, please contact us:

jiangtaoxie@mail.dlut.edu.cn