Sketch Me That Shoe


This repository contains the code for the CVPR 2016 paper 'Sketch Me That Shoe', a deep-learning-based approach to fine-grained sketch-based image retrieval.

For more details, please visit our project page:

New: Tensorflow implementation can be found here:

If you use this code for your research, please cite our paper:

    @inproceedings{yu2016sketch,
      Author = {Qian Yu and Feng Liu and Yi-Zhe Song and Tao Xiang and Timothy M. Hospedales and Chen Change Loy},
      Title = {Sketch Me That Shoe},
      Booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      Year = {2016}
    }


  1. License

  2. Installation

  3. Run the demo

  4. Re-training the model

  5. Extra comment

###License

MIT License


###Installation

  1. Download the repository

    git clone
  2. Build Caffe and pycaffe

    a. Go to the folder $SBIR_ROOT/caffe_sbir

    b. Modify the paths in Makefile.config. To use this code, you must compile with the Python layer enabled (`WITH_PYTHON_LAYER := 1`)

    c. Compile Caffe and pycaffe:

        make -j32 && make pycaffe

  3. Go to the folder $SBIR_ROOT, and run

    source bashsbir

###Run the demo

  1. To run the demo, please first download our database and models. Go to the root folder of this project, and run

    chmod +x

Note: You can also download them manually from our project page:

  2. Run the demo:

    python $SBIR_ROOT/tools/
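
At its core, the retrieval step of the demo is a nearest-neighbour search in the learned embedding space: the query sketch's feature is compared against the gallery of photo features. A minimal sketch of that idea, assuming precomputed feature arrays (the function name and shapes here are illustrative, not from this repo):

```python
import numpy as np

def retrieve(sketch_feat, photo_feats, k=10):
    """Rank gallery photos by Euclidean distance to a query sketch feature.

    sketch_feat: (d,) query embedding; photo_feats: (n, d) gallery embeddings.
    Returns the indices of the k nearest photos, closest first.
    """
    dists = np.linalg.norm(photo_feats - sketch_feat, axis=1)
    return np.argsort(dists)[:k]

# Toy example: the query is closest to gallery item 1.
gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
query = np.array([0.1, 0.9])
print(retrieve(query, gallery, k=2))  # -> [1 2]
```

The actual demo script operates on features produced by the trained network, but the ranking logic is the same.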

###Re-training the model

  1. Go to the root folder of this project

    cd $SBIR_ROOT
  2. Run the command


Note: Please make sure the initial model '/init/sketchnet_init.caffemodel' is under the folder experiments/. This initial model can be downloaded from our project page.

###Extra comment

  1. All provided models and code are optimised versions, and our latest results are shown below:

    Dataset   acc.@1   acc.@10   %corr.
    Shoes     52.17%   92.17%    72.29%
    Chairs    72.16%   98.96%    74.36%
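
Here acc.@K is the fraction of query sketches whose true-match photo appears among the top K retrieved results. A hedged sketch of how such a metric can be computed (function and variable names are hypothetical):

```python
import numpy as np

def acc_at_k(rankings, true_ids, k):
    """Top-K retrieval accuracy.

    rankings: (n_queries, n_gallery) gallery indices sorted best-first;
    true_ids: (n_queries,) index of each query's true-match photo.
    Returns the fraction of queries whose true match is in the top k.
    """
    hits = [true_ids[i] in rankings[i, :k] for i in range(len(true_ids))]
    return float(np.mean(hits))

# Query 0's match (id 0) is ranked 2nd; query 1's match (id 1) is ranked 1st.
rankings = np.array([[2, 0, 1], [1, 2, 0]])
true_ids = np.array([0, 1])
print(acc_at_k(rankings, true_ids, 1))  # -> 0.5
print(acc_at_k(rankings, true_ids, 2))  # -> 1.0
```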

Further explanation: The model reported in our paper was trained on our originally collected sketches, which contain much noise. In order to improve usability, we cleaned the sketch images (removed some noise) after the CVPR 2016 deadline. You can compare images 'test_shoes_370.png' and '370.jpg' (or 'test_chairs_230.png'/'230.jpg') to see the difference. We re-trained our model using the clean sketch images, and the new results are listed above. Both the model and the dataset we have released are the latest versions. Sorry for any confusion this may have caused. If you have further questions, please email

  2. This project uses code from the following projects:

    Caffe trainnet python wrapper and python data layer

    L2 normalization layer

    Triplet loss
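
The L2 normalization layer and triplet loss credited above are the two key ingredients of the training objective: features are unit-normalized, and the loss pushes a sketch (anchor) closer to its true-match photo (positive) than to a non-matching photo (negative). A NumPy sketch of both, assuming an illustrative margin value rather than this repo's exact setting:

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Scale a feature vector to unit Euclidean norm."""
    return x / (np.linalg.norm(x) + eps)

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss: penalize unless the anchor-positive squared distance is
    smaller than the anchor-negative squared distance by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, margin + d_pos - d_neg)

a = l2_normalize(np.array([3.0, 4.0]))    # -> [0.6, 0.8]
p = l2_normalize(np.array([3.1, 3.9]))    # close to the anchor
n = l2_normalize(np.array([-4.0, 3.0]))   # far from the anchor
print(triplet_loss(a, p, n, margin=0.3))  # -> 0.0 (margin already satisfied)
```

In the Caffe implementation these operations live in custom Python layers; the sketch above only captures the forward computation.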
