
Selective Joint Fine-tuning

By Weifeng Ge, Yizhou Yu

Department of Computer Science, The University of Hong Kong

Table of Contents

  1. Introduction
  2. Citation
  3. Pipeline
  4. Code and Installation
  5. Models
  6. Results

Introduction

This repository contains the code and models described in our CVPR 2017 spotlight paper "Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning" (https://arxiv.org/abs/1702.08690). The models are the ones used in our experiments on Stanford Dogs 120, Oxford Flowers 102, Caltech 256, and MIT Indoor 67.

Note

  1. All algorithms are implemented on top of the Caffe deep learning framework.
  2. To run the training code, add the additional layers provided here to your own Caffe build (a quick sanity check follows this list).
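
If pycaffe is built, one quick way to confirm the extra layers were compiled in is to look them up among the registered layer types. A minimal sketch; `YourNewLayerType` is a placeholder for whichever layer type you added, not a layer shipped by this repository:

    # Sanity check: confirm an added layer type is registered in this Caffe build.
    # 'YourNewLayerType' is a placeholder, not an actual layer name from this repo.
    import caffe

    print('YourNewLayerType' in caffe.layer_type_list())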

Citation

If you use this code and these models in your research, please cite:

   @InProceedings{Ge_2017_CVPR,
           author = {Ge, Weifeng and Yu, Yizhou},
           title = {Borrowing Treasures From the Wealthy: Deep Transfer Learning Through Selective Joint Fine-Tuning},
           booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
           month = {July},
           year = {2017}
   }

Pipeline

  1. Pipeline of the proposed selective joint fine-tuning (figure: Selective Joint Fine-tuning Pipeline). A conceptual sketch of the source-image retrieval step follows below.
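
The retrieval step picks, for each target-domain training image, its nearest neighbors in the large-scale source domain (using low-level descriptors such as filter bank responses), and joint fine-tuning then trains on the target data together with the selected source subset. Below is a minimal NumPy sketch of that selection step; the descriptor format, the helper name `select_source_samples`, and the value of `k` are illustrative assumptions, not the repository's Caffe implementation.

    # Minimal sketch of nearest-neighbor source-sample selection, assuming
    # precomputed low-level descriptors (e.g., histograms of filter bank
    # responses). Names and k are illustrative, not the shipped Caffe code.
    import numpy as np

    def select_source_samples(target_desc, source_desc, k=10):
        """Return the union of each target image's k nearest source images,
        measured by Euclidean distance between descriptors."""
        selected = set()
        for t in target_desc:
            dists = np.linalg.norm(source_desc - t, axis=1)  # distance to every source image
            selected.update(np.argsort(dists)[:k].tolist())  # keep the k closest
        return sorted(selected)

    # Toy usage: 5 target images, 1000 source images, 64-D descriptors.
    rng = np.random.default_rng(0)
    subset = select_source_samples(rng.random((5, 64)), rng.random((1000, 64)))
    print(len(subset), "source images selected for joint fine-tuning")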

Code and Installation

  1. Add new layers into Caffe:

  2. Image Retrieval:

  3. Selective Joint Fine-tuning (a launch sketch follows this list):
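
Training in Caffe is normally driven through a solver, so the joint fine-tuning could be launched from pycaffe as sketched below, assuming the additional layers are already compiled into your build. The filenames `solver.prototxt` and `pretrained.caffemodel` are placeholders, not files named in this repository.

    # Hedged sketch of launching the joint fine-tuning with pycaffe, assuming
    # a Caffe build that already includes this repository's additional layers.
    # 'solver.prototxt' and 'pretrained.caffemodel' are placeholder filenames.
    import caffe

    caffe.set_mode_gpu()
    caffe.set_device(0)

    # The solver's net would define a shared trunk with two classifiers:
    # one over the source-domain labels and one over the target-domain labels.
    solver = caffe.SGDSolver('solver.prototxt')
    solver.net.copy_from('pretrained.caffemodel')  # initialize the shared trunk

    solver.step(100000)  # run the joint fine-tuning iterations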

Models

  1. Visualizations of network structures (tools from ethereon):

  2. Model files:

Results

  1. Multi-crop testing accuracy on Stanford Dogs 120 (multi-crop evaluation as in VGG-net):

     | Method | Mean Accuracy (%) |
     | --- | --- |
     | HAR-CNN | 49.4 |
     | Local Alignment | 57.0 |
     | Multi Scale Metric Learning | 70.3 |
     | MagNet | 75.1 |
     | Web Data + Original Data | 85.9 |
     | Target Only Training from Scratch | 53.8 |
     | Selective Joint Training from Scratch | 83.4 |
     | Fine-tuning w/o source domain | 80.4 |
     | Selective Joint FT with all source samples | 85.6 |
     | Selective Joint FT with random source samples | 85.5 |
     | Selective Joint FT w/o iterative NN retrieval | 88.3 |
     | Selective Joint FT with Gabor filter bank | 87.5 |
     | Selective Joint FT | 90.2 |
     | Selective Joint FT with Model Fusion | 90.3 |
  2. Multi-crop testing accuracy on Oxford Flowers 102 (multi-crop evaluation as in VGG-net):

     | Method | Mean Accuracy (%) |
     | --- | --- |
     | MPP | 91.3 |
     | Multi-model Feature Concat | 91.3 |
     | MagNet | 91.4 |
     | VGG-19 + GoogleNet + AlexNet | 94.5 |
     | Target Only Training from Scratch | 58.2 |
     | Selective Joint Training from Scratch | 80.6 |
     | Fine-tuning w/o source domain | 90.2 |
     | Selective Joint FT with all source samples | 93.4 |
     | Selective Joint FT with random source samples | 93.2 |
     | Selective Joint FT w/o iterative NN retrieval | 94.2 |
     | Selective Joint FT with Gabor filter bank | 93.8 |
     | Selective Joint FT | 94.7 |
     | Selective Joint FT with Model Fusion | 95.8 |
     | VGG-19 + Part Constellation Model | 95.3 |
     | Selective Joint FT with val set | 97.0 |
  3. Multi-crop testing accuracy on Caltech 256 (multi-crop evaluation as in VGG-net); entries are mean accuracy (%) for 15, 30, 45, and 60 training images per class:

     | Method | 15/class | 30/class | 45/class | 60/class |
     | --- | --- | --- | --- | --- |
     | M-HMP | 40.5 ± 0.4 | 48.0 ± 0.2 | 51.9 ± 0.2 | 55.2 ± 0.3 |
     | Z.&F. Net | 65.7 ± 0.2 | 70.6 ± 0.2 | 72.7 ± 0.4 | 74.2 ± 0.3 |
     | VGG-19 | - | - | - | 85.1 ± 0.3 |
     | VGG-19 + GoogleNet + AlexNet | - | - | - | 86.1 |
     | VGG-19 + VGG-16 | - | - | - | 86.2 ± 0.3 |
     | Fine-tuning w/o source domain | 76.4 ± 0.1 | 81.2 ± 0.2 | 83.5 ± 0.2 | 86.4 ± 0.3 |
     | Selective Joint FT | 80.5 ± 0.3 | 83.8 ± 0.5 | 87.0 ± 0.1 | 89.1 ± 0.2 |
  4. Multi-crop testing accuracy on MIT Indoor 67 (multi-crop evaluation as in VGG-net):

     | Method | Mean Accuracy (%) |
     | --- | --- |
     | MetaObject-CNN | 78.9 |
     | MPP + DFSL | 80.8 |
     | VGG-19 + FV | 81.0 |
     | VGG-19 + GoogleNet | 84.7 |
     | Multi Scale + Multi Model Ensemble | 86.0 |
     | Fine-tuning w/o source domain | 81.7 |
     | Selective Joint FT with ImageNet | 82.8 |
     | Selective Joint FT with Places | 85.8 |
     | Selective Joint FT with hybrid data | 85.5 |
     | Average the output of Places and hybrid data | 86.9 |
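
All numbers above come from multi-crop testing, i.e., averaging the class posteriors predicted for several crops of each test image. A minimal sketch of that averaging is given below; the crop layout and the `predict` stand-in for the network's forward pass are illustrative assumptions, not this repository's evaluation code.

    # Minimal sketch of multi-crop testing: average the class posteriors
    # predicted for several crops of one image. `predict` stands in for a
    # forward pass of the fine-tuned network; crops/shapes are illustrative.
    import numpy as np

    def multi_crop_accuracy(crops_per_image, labels, predict):
        """crops_per_image: list of arrays, each (n_crops, ...) for one image."""
        correct = 0
        for crops, label in zip(crops_per_image, labels):
            probs = np.mean([predict(c) for c in crops], axis=0)  # average over crops
            correct += int(np.argmax(probs) == label)
        return correct / len(labels)

    # Toy usage with a fake 10-class "network".
    rng = np.random.default_rng(0)
    fake_predict = lambda crop: rng.random(10)
    crops = [rng.random((10, 3, 224, 224)) for _ in range(4)]  # 10 crops per image
    print(multi_crop_accuracy(crops, [0, 1, 2, 3], fake_predict))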
