MDPM: Mid-level Deep Pattern Mining

Introduction

This repository contains the source code of the algorithm described in the CVPR 2015 paper Mid-level Deep Pattern Mining and in the technical report Mining Mid-level Visual Patterns with Deep CNN Activations. More details are available on the project page. This package has been tested with Matlab 2014a on a 64-bit Linux machine. This code is for research purposes only.

Citing MDPM

If you find MDPM useful in your research, please consider citing:

@inproceedings{LiLSH15CVPR,
    author = {Yao Li and Lingqiao Liu and Chunhua Shen and Anton van den Hengel},
    title = {Mid-level Deep Pattern Mining},
    booktitle = {CVPR},
    year = {2015},
    pages = {971-980},
}

Installing MDPM

Prerequisites

  1. Caffe: install Caffe by following its installation instructions. Do not forget to run make matcaffe to compile Caffe's Matlab interface. You also need to download the ImageNet mean file (run get_ilsvrc_aux.sh from data/ilsvrc12). Note: as we only use the Caffe CNN as a feature extractor, installing Caffe in CPU-only mode is sufficient.
  2. CNN models: we consider two CNN models in the experiments. The first is the BVLC Reference CaffeNet (CaffeRef for short), which can be downloaded by running download_model_binary.py models/bvlc_reference_caffenet from scripts. The second is the VGG 19-layer Very Deep model (VGGVD for short), which can be downloaded from here.
  3. Apriori algorithm: we use this implementation. Click the link to download the package, uncompress it, and run make in apriori/apriori/src to compile it. Detailed usage of the package can be found here.
  4. Liblinear: download liblinear and compile it by following its instructions.
  5. KSVDS-Box v11: we use the im2colstep function from this toolbox, so you need to download and compile it (im2colstep is found in ksvdsbox11/private).
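The prerequisite builds above boil down to a few shell commands. The session below is a sketch only; it assumes Caffe and the apriori package were unpacked into the current directory, so adjust the paths to your own layout:

```
$ cd caffe
$ make all && make matcaffe            # build Caffe and its Matlab interface
$ ./data/ilsvrc12/get_ilsvrc_aux.sh    # fetch the ImageNet mean file
$ cd ../apriori/apriori/src
$ make                                 # build the apriori executable
```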
Configuring MDPM

  1. Download MDPM: git clone https://github.com/yaoliUoA/MDPM.
  2. Download the MIT Indoor dataset from here.
  3. Open init.m in Matlab and change the values of several variables, including conf.pathToLiblinear, conf.pathToCaffe, conf.dataset and conf.imgDir, based on your local configuration.
  4. Copy the executable file apriori from apriori/apriori/src into the mining directory.
  5. Copy the compiled mex file im2colstep into the cnn directory.
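Steps 4 and 5 above are plain file copies. A minimal sketch, using a temporary directory as a stand-in for the MDPM checkout (the touched files stand in for the binaries you compiled earlier; the .mexa64 extension is what Matlab produces on 64-bit Linux):

```shell
set -e
root=$(mktemp -d)                            # stand-in for the MDPM root
mkdir -p "$root/apriori/apriori/src" "$root/mining" "$root/cnn"
touch "$root/apriori/apriori/src/apriori"    # built by 'make' in the apriori step
touch "$root/im2colstep.mexa64"              # compiled from KSVDS-Box v11
cp "$root/apriori/apriori/src/apriori" "$root/mining/"   # step 4
cp "$root/im2colstep.mexa64" "$root/cnn/"                # step 5
```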
Running MDPM

  1. Run demo.m. It should work for the MIT Indoor dataset if you have followed the instructions above. Note that we have not released a demo for the PASCAL VOC datasets, as their dataset setup is different.
  2. Important: it may take some time to obtain the final classification result, so we suggest running MDPM on a cluster where jobs can be run in parallel. The *.sh scripts are provided for submitting jobs to a cluster.
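On a cluster node without a display, demo.m can be launched from the shell; this is a sketch, assuming matlab is on your PATH:

```
$ matlab -nodisplay -nosplash -r "demo; exit"
```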

Pre-computed image features

We provide the final image features generated by the proposed MDPM algorithm using different CNN models (CaffeRef or VGGVD). With these you should be able to reproduce the results presented in the CVPR 2015 paper and the technical report.

  1. MIT Indoor dataset: feature_MITIndoor_CaffeRef and feature_MITIndoor_VGGVD. After uncompressing the downloaded file, copy the .mat files to the data/MIT67/feaFinal_128_32_150 directory (create it yourself); you should then be able to run classify.m under classify to reproduce the classification accuracy presented in the technical report.
  2. PASCAL VOC 2007 dataset: feature_VOC2007_CaffeRef and feature_VOC2007_VGGVD. After uncompressing the downloaded file, copy the .mat files to the data/VOC2007/feaFinal_128_32_150 directory (create it yourself); you should then be able to run train_VOC.m and then test_VOC.m under classify to reproduce the mean average precision presented in the technical report.
  3. PASCAL VOC 2012 dataset: feature_VOC2012_VGGVD. After uncompressing the downloaded file, copy the .mat files to the data/VOC2012/feaFinal_128_32_150 directory (create it yourself); you should then be able to run train_VOC.m and then test_VOC_txt.m under classify. The generated .txt files can be submitted to the evaluation server.
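The feaFinal_128_32_150 directories above can be created in one go. A sketch, again using a temporary directory as a stand-in for the MDPM root (the touched .mat file stands in for an uncompressed feature file):

```shell
set -e
root=$(mktemp -d)                                  # stand-in for the MDPM root
for d in MIT67 VOC2007 VOC2012; do
    mkdir -p "$root/data/$d/feaFinal_128_32_150"   # expected feature directories
done
touch "$root/feature_demo.mat"                     # stand-in for a downloaded feature file
cp "$root/feature_demo.mat" "$root/data/MIT67/feaFinal_128_32_150/"
```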

Feedback

If you have any questions or feedback, or find bugs in the code, please contact yao.li01@adelaide.edu.au.