Selective Joint Fine-tuning

By Weifeng Ge and Yizhou Yu

Department of Computer Science, The University of Hong Kong

Table of Contents

  1. Introduction
  2. Citation
  3. Pipeline
  4. Codes and Installation
  5. Models
  6. Results

Introduction

This repository contains the code and models described in the paper ["Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning"](https://arxiv.org/abs/1702.08690). The models are those used in the experiments on Stanford Dogs 120, Oxford Flowers 102, Caltech 256, and MIT Indoor 67.

Note

  1. All algorithms are implemented in the deep learning framework Caffe.
  2. Please add the additional custom layers to your own Caffe build before running the training code.

Citation

If you use these codes and models in your research, please cite:

   @InProceedings{Ge_2017_CVPR,
           author = {Ge, Weifeng and Yu, Yizhou},
           title = {Borrowing Treasures From the Wealthy: Deep Transfer Learning Through Selective Joint Fine-Tuning},
           booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
           month = {July},
           year = {2017}
   }

Pipeline

  1. Pipeline of the proposed selective joint fine-tuning (figure: Selective Joint Fine-tuning Pipeline).
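In outline, the pipeline first retrieves, for each target-domain image, its nearest neighbors in the large source domain using histograms of low-level filter-bank responses, and then fine-tunes a single network jointly on the retrieved source subset and the target training set. The sketch below illustrates only the retrieval step; the filter bank, histogram parameters, and Euclidean distance here are illustrative stand-ins, not the exact features used in the paper.

```python
import numpy as np

def filter_bank_features(images, filters, bins=8):
    """Describe each grayscale image by concatenated histograms of its
    responses to a small filter bank (a stand-in for the low-level
    convolutional filters used in the paper)."""
    feats = []
    for img in images:
        hists = []
        for f in filters:
            kh, kw = f.shape
            h, w = img.shape
            # valid 2-D cross-correlation of the image with the filter
            resp = np.zeros((h - kh + 1, w - kw + 1))
            for i in range(resp.shape[0]):
                for j in range(resp.shape[1]):
                    resp[i, j] = np.sum(img[i:i + kh, j:j + kw] * f)
            hist, _ = np.histogram(resp, bins=bins, range=(-1.0, 1.0))
            hists.append(hist / max(hist.sum(), 1))  # normalized histogram
        feats.append(np.concatenate(hists))
    return np.array(feats)

def retrieve_source_samples(target_feats, source_feats, k):
    """For each target image, return the indices of the k nearest source
    images (Euclidean distance in filter-response space)."""
    d = np.linalg.norm(target_feats[:, None, :] - source_feats[None, :, :],
                       axis=2)
    return np.argsort(d, axis=1)[:, :k]
```

The union of the retrieved source samples and the target set then forms the training data for the joint fine-tuning stage; in the paper the retrieval is refined iteratively, which this sketch omits.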

Codes and Installation

  1. Add new layers into Caffe:

  2. Image Retrieval:

  3. Selective Joint Fine-tuning:

Models

  1. Visualizations of network structures (tools from ethereon):

  2. Model files:

Results

  1. Multi-crop testing accuracy on Stanford Dogs 120 (following the multi-crop protocol of VGG-net):

     | Method | Mean Accuracy (%) |
     | --- | --- |
     | HAR-CNN | 49.4 |
     | Local Alignment | 57.0 |
     | Multi Scale Metric Learning | 70.3 |
     | MagNet | 75.1 |
     | Web Data + Original Data | 85.9 |
     | Target Only Training from Scratch | 53.8 |
     | Selective Joint Training from Scratch | 83.4 |
     | Fine-tuning w/o source domain | 80.4 |
     | Selective Joint FT with all source samples | 85.6 |
     | Selective Joint FT with random source samples | 85.5 |
     | Selective Joint FT w/o iterative NN retrieval | 88.3 |
     | Selective Joint FT with Gabor filter bank | 87.5 |
     | Selective Joint FT | 90.2 |
     | Selective Joint FT with Model Fusion | 90.3 |
  2. Multi-crop testing accuracy on Oxford Flowers 102 (following the multi-crop protocol of VGG-net):

     | Method | Mean Accuracy (%) |
     | --- | --- |
     | MPP | 91.3 |
     | Multi-model Feature Concat | 91.3 |
     | MagNet | 91.4 |
     | VGG-19 + GoogleNet + AlexNet | 94.5 |
     | Target Only Training from Scratch | 58.2 |
     | Selective Joint Training from Scratch | 80.6 |
     | Fine-tuning w/o source domain | 90.2 |
     | Selective Joint FT with all source samples | 93.4 |
     | Selective Joint FT with random source samples | 93.2 |
     | Selective Joint FT w/o iterative NN retrieval | 94.2 |
     | Selective Joint FT with Gabor filter bank | 93.8 |
     | Selective Joint FT | 94.7 |
     | Selective Joint FT with Model Fusion | 95.8 |
     | VGG-19 + Part Constellation Model | 95.3 |
     | Selective Joint FT with val set | 97.0 |
  3. Multi-crop testing accuracy on Caltech 256 (following the multi-crop protocol of VGG-net):

     | Method | Mean Acc (%) 15/class | Mean Acc (%) 30/class | Mean Acc (%) 45/class | Mean Acc (%) 60/class |
     | --- | --- | --- | --- | --- |
     | M-HMP | 40.5 ± 0.4 | 48.0 ± 0.2 | 51.9 ± 0.2 | 55.2 ± 0.3 |
     | Z.&F. Net | 65.7 ± 0.2 | 70.6 ± 0.2 | 72.7 ± 0.4 | 74.2 ± 0.3 |
     | VGG-19 | - | - | - | 85.1 ± 0.3 |
     | VGG-19 + GoogleNet + AlexNet | - | - | - | 86.1 |
     | VGG-19 + VGG-16 | - | - | - | 86.2 ± 0.3 |
     | Fine-tuning w/o source domain | 76.4 ± 0.1 | 81.2 ± 0.2 | 83.5 ± 0.2 | 86.4 ± 0.3 |
     | Selective Joint FT | 80.5 ± 0.3 | 83.8 ± 0.5 | 87.0 ± 0.1 | 89.1 ± 0.2 |
  4. Multi-crop testing accuracy on MIT Indoor 67 (following the multi-crop protocol of VGG-net):

     | Method | Mean Accuracy (%) |
     | --- | --- |
     | MetaObject-CNN | 78.9 |
     | MPP + DFSL | 80.8 |
     | VGG-19 + FV | 81.0 |
     | VGG-19 + GoogleNet | 84.7 |
     | Multi Scale + Multi Model Ensemble | 86.0 |
     | Fine-tuning w/o source domain | 81.7 |
     | Selective Joint FT with ImageNet | 82.8 |
     | Selective Joint FT with Places | 85.8 |
     | Selective Joint FT with hybrid data | 85.5 |
     | Average the output of Places and hybrid data | 86.9 |
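Multi-crop testing, used in all the tables above, averages the network's class posteriors over many crops of each test image. VGG-style evaluation uses a dense grid of crops over several scales; the classic 10-crop variant below (four corners plus center, each with its horizontal flip) is the simplest instance of the same averaging idea, and `predict_fn` is a placeholder for the trained network's softmax.

```python
import numpy as np

def ten_crop(img, crop):
    """Return the 10-crop set of a (H, W) or (H, W, C) image: four
    corners + center, each with its horizontal flip."""
    h, w = img.shape[:2]
    c = crop
    tops = [0, 0, h - c, h - c, (h - c) // 2]
    lefts = [0, w - c, 0, w - c, (w - c) // 2]
    crops = [img[t:t + c, l:l + c] for t, l in zip(tops, lefts)]
    crops += [np.fliplr(x) for x in crops]  # add horizontal flips
    return np.stack(crops)

def multi_crop_predict(predict_fn, img, crop):
    """Average the class posteriors returned by predict_fn over all crops."""
    probs = np.array([predict_fn(x) for x in ten_crop(img, crop)])
    return probs.mean(axis=0)
```

Denser crop grids and multiple scales follow the same pattern: generate the crops, run the network on each, and average the resulting probability vectors before taking the argmax.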